Method_opt parameter statistics

Hi all.

Regarding my discussion: gathering statistics fails with the following call.

BEGIN

DBMS_STATS.GATHER_TABLE_STATS(ownname => 'TBMSBIZM', tabname => 'WEEKSDATA', degree => '8',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL INDEXES
FOR ALL INDEXED COLUMNS', cascade => TRUE);

END;

The problem was that there was a line break in the middle of the string, after "FOR ALL INDEXES". When I entered it correctly, it worked. In other words, the following code worked for me.

BEGIN

DBMS_STATS.GATHER_TABLE_STATS(ownname => 'TBMSBIZM', tabname => 'WEEKSDATA', degree => '8',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL INDEXES FOR ALL INDEXED COLUMNS', cascade => TRUE);

END;

However, I have never seen this variant of the method_opt parameter mentioned anywhere. Yet this code works fine in production.

Excerpt from the oracle documentation.

Accepts one of the following options or both in combination:

  • FOR ALL [INDEXED | HIDDEN] COLUMNS [size_clause]
  • FOR COLUMNS [size_clause] column [size_clause] [, column [size_clause]...]

http://docs.oracle.com/cd/E11882_01/appdev.112/e40758/d_stats.htm#ARPLS68582

According to this documentation, the code above should not work.
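For reference, the documented clause closest to this would be FOR ALL INDEXED COLUMNS; a sketch of what the call would look like using only documented options (same parameters as above):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'TBMSBIZM',
    tabname          => 'WEEKSDATA',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL INDEXED COLUMNS SIZE AUTO',  -- documented clause
    cascade          => TRUE);
END;
/
```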

Experts, what comments do you have on this...

As far as I can see, FOR ALL INDEXES does nothing, but it does not result in a parse error either:

-- 11.2.0.1

drop table t;

create table t
as
select rownum id
     , mod(rownum, 2) col1
     , mod(rownum, 5) col2
     , mod(rownum, 10) col3
  from dual
connect by level <= 10000;

create index t_idx1 on t(id);

exec dbms_stats.delete_table_stats(user, 'T')

exec dbms_stats.gather_table_stats(user, 'T', method_opt => 'FOR ALL INDEXES FOR ALL INDEXED COLUMNS')

select column_name, num_distinct, last_analyzed, histogram from user_tab_cols where table_name = 'T' order by 1;

COLUMN_NAME                    NUM_DISTINCT LAST_ANA HISTOGRAM
------------------------------ ------------ -------- ---------------
COL1                                                 NONE
COL2                                                 NONE
COL3                                                 NONE
ID                                    10000 01.12.15 HEIGHT BALANCED

--> creates column statistics (and histograms) just for the indexed columns

exec dbms_stats.delete_table_stats(user, 'T')

exec dbms_stats.gather_table_stats(user, 'T', method_opt => 'FOR ALL INDEXES')

select column_name, num_distinct, last_analyzed, histogram from user_tab_cols where table_name = 'T' order by 1;

COLUMN_NAME                    NUM_DISTINCT LAST_ANA HISTOGRAM

------------------------------ ------------ -------- ---------------

COL1                                                 NONE

COL2                                                 NONE

COL3                                                 NONE

ID                                                   NONE

--> creates no column statistics

exec dbms_stats.delete_table_stats(user, 'T')

exec dbms_stats.gather_table_stats(user, 'T', method_opt => 'FOR ALL COLUMNS')

select column_name, num_distinct, last_analyzed, histogram from user_tab_cols where table_name = 'T' order by 1;

COLUMN_NAME                    NUM_DISTINCT LAST_ANA HISTOGRAM
------------------------------ ------------ -------- ---------------
COL1                                      2 01.12.15 FREQUENCY
COL2                                      5 01.12.15 FREQUENCY
COL3                                     10 01.12.15 FREQUENCY
ID                                    10000 01.12.15 HEIGHT BALANCED

--> creates column statistics (and histograms) for all columns

exec dbms_stats.delete_table_stats(user, 'T')

exec dbms_stats.gather_table_stats(user, 'T', method_opt => 'FOR ALL COLUMNS SIZE 1 FOR COLUMNS SIZE 254 COL3')

select column_name, num_distinct, last_analyzed, histogram from user_tab_cols where table_name = 'T' order by 1;

COLUMN_NAME                    NUM_DISTINCT LAST_ANA HISTOGRAM
------------------------------ ------------ -------- ---------------
COL1                                      2 01.12.15 NONE
COL2                                      5 01.12.15 NONE
COL3                                     10 01.12.15 FREQUENCY
ID                                    10000 01.12.15 NONE

--> creates column statistics for all columns and a histogram for COL3

I did a few tests with other nonsense options, but they resulted in the expected "ORA-20000: Unable to analyze FOR clause":

SQL> exec dbms_stats.gather_table_stats (user, 'T', method_opt => 'for TOM KYTE for all INDEXED COLUMNS')

BEGIN dbms_stats.gather_table_stats (user, 'T', method_opt => 'for TOM KYTE for all INDEXED COLUMNS'); END;

*

ERROR at line 1:

ORA-20000: Unable to analyze FOR clause: for TOM KYTE for all INDEXED COLUMNS

ORA-06512: at "SYS.DBMS_STATS", line 20337

ORA-06512: at "SYS.DBMS_STATS", line 20360

ORA-06512: at line 1

But as I already said in another of your threads: most likely you want histograms only on columns where they are useful, i.e. columns with a skewed data distribution, and basic statistics on all columns (and yes: my example contains no skewed data and should be reconsidered...).

Regards,

Martin

Tags: Database

Similar Questions

  • DBMS_STATS.SET_GLOBAL_PREFS setting for method_opt got reset

    Hello

    I have a few 11.2.0.3 on windows 2K8R2 oracle databases.

    I had problems with the method_opt parameter set to 'for all columns size auto', which is the default value,

    so I changed it to 'for all columns size repeat'.

    Somehow it got reset.

    It wasn't done by me (... unless maybe I was sleepwalking, which is very doubtful).

    The question is... does Oracle reset this value somewhere, or under some circumstances?

    Oracle will not reset this in any scenario I can think of, Geert, unless *maybe* as part of an upgrade/patch, but even in that case I don't see it happening. I wouldn't expect anything in the alert log either. Any chance the table was dropped and then recreated as part of a little exercise? Maybe a delete-rows-by-CTAS-and-rename?

    As a side point, this underlines why it is a good idea to make changes to objects via scripts and check those scripts into change control. No ad-hoc DDL or changes to statistics gathering methods, etc.; write a script, check it into change control, and record it in a change log somewhere, whether on paper, in a file, or - much better - in a table in the database.
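    As a quick check, the value currently in effect can be read back with DBMS_STATS.GET_PREFS (a sketch; the owner/table names in the second query are placeholders):

    ```sql
    -- global preference
    SELECT DBMS_STATS.GET_PREFS('METHOD_OPT') FROM dual;

    -- value in effect for one table (falls back to the global preference)
    SELECT DBMS_STATS.GET_PREFS('METHOD_OPT', 'MYSCHEMA', 'MYTABLE') FROM dual;
    ```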

  • Concept of statistics gathering

    Hi, I use Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production.

    We do not want a histogram to be created on column C1 of table TEST. To achieve this, I did the following:

    begin
    dbms_stats.delete_column_stats(
    ownname=>'SCHEMA1', tabname=>'TEST', colname=>'C1', col_stat_type=>'HISTOGRAM');
    END;
    /
    
    BEGIN
    dbms_stats.set_table_prefs('SCHEMA1', 'TEST','METHOD_OPT', 'FOR ALL COLUMNS SIZE AUTO, FOR COLUMNS SIZE 1 C1');
    END;
    /
     
    but then I see the histogram getting overridden during the weekend stats job, which gathers schema statistics as below:
    
     BEGIN
       DBMS_STATS.GATHER_SCHEMA_STATS (
          OWNNAME            => 'SCHEMA1',
          ESTIMATE_PERCENT   => DBMS_STATS.AUTO_SAMPLE_SIZE,
          METHOD_OPT         => 'FOR ALL COLUMNS SIZE AUTO',
          GRANULARITY        => 'AUTO',
          CASCADE            => DBMS_STATS.AUTO_CASCADE,
          NO_INVALIDATE      => DBMS_STATS.AUTO_INVALIDATE,
          DEGREE             => 16);
    END;
    /
    
    To address this, I then tested the same schema stats job with the METHOD_OPT parameter commented out:
    BEGIN
       DBMS_STATS.GATHER_SCHEMA_STATS (
          OWNNAME            => 'SCHEMA1',
          ESTIMATE_PERCENT   => DBMS_STATS.AUTO_SAMPLE_SIZE,
          --METHOD_OPT         => 'FOR ALL COLUMNS SIZE AUTO',
          GRANULARITY        => 'AUTO',
          CASCADE            => DBMS_STATS.AUTO_CASCADE,
          NO_INVALIDATE      => DBMS_STATS.AUTO_INVALIDATE,
          DEGREE             => 16);
    END;
    /
     
    
    

    But I was under the assumption that the table preference would be picked up, which was wrong. Now, to address this scenario, I tested the schema stats job above with the METHOD_OPT parameter commented out; then it did pick up the table preference and the histogram was not created, as expected. But I'm a bit worried: could commenting out the METHOD_OPT parameter have a negative impact on other objects?


    Another option that I am thinking of is to
    lock the table stats, run the schema stats as usual, then unlock the table stats and gather statistics on the table as below.

    begin
    DBMS_STATS.lock_table_stats ('XIGNCMN', 'TEST');
    end;
    /


    BEGIN
    DBMS_STATS.GATHER_SCHEMA_STATS (
    OWNNAME          => 'SCHEMA1',
    ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE,
    METHOD_OPT       => 'FOR ALL COLUMNS SIZE AUTO',
    GRANULARITY      => 'AUTO',
    CASCADE          => DBMS_STATS.AUTO_CASCADE,
    NO_INVALIDATE    => DBMS_STATS.AUTO_INVALIDATE,
    DEGREE           => 16);
    END;
    /


    begin
    DBMS_STATS.unlock_table_stats ('SCHEMA1', 'TEST');
    end;
    /


    BEGIN
    DBMS_STATS.GATHER_TABLE_STATS (
    OWNNAME          => 'SCHEMA1',
    TABNAME          => 'TEST',
    ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE,
    METHOD_OPT       => 'FOR ALL COLUMNS SIZE AUTO, FOR COLUMNS SIZE 1 C1',
    GRANULARITY      => 'AUTO',
    CASCADE          => DBMS_STATS.AUTO_CASCADE,
    NO_INVALIDATE    => DBMS_STATS.AUTO_INVALIDATE,
    DEGREE           => 16);
    END;
    /


    So my question is: will just commenting out the METHOD_OPT parameter have a negative impact on other objects' stats,
    or should I go for option 2?

    With the METHOD_OPT parameter set to NULL, the job picks up the preferences set in dba_tab_stat_prefs, and for tables for which no preference is set, it uses the default value 'for all columns size auto'.

    After deleting the histogram, we set METHOD_OPT to NULL in the schema stats job and it worked according to our requirement.
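    To verify which preferences are explicitly set, the DBA_TAB_STAT_PREFS view can be queried; a sketch using the schema/table names from the post:

    ```sql
    SELECT owner, table_name, preference_name, preference_value
      FROM dba_tab_stat_prefs
     WHERE owner = 'SCHEMA1'
       AND table_name = 'TEST';
    ```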

  • Need details on statistics - Internals

    Hello

    I am aware of statistics and how they help the CBO. I would like to learn more about the internals of statistics. I know that Oracle
    keeps some details in the views below, in their columns, and that partition metadata is also considered.

    DBA_TABLES
    DBA_TAB_STATISTICS
    DBA_INDEXES
    DBA_IND_STATISTICS

    I know that DBA_TABLES and DBA_INDEXES hold details such as NUM_ROWS and LAST_ANALYZED.

    Please provide details on the data being kept in other views.

    I am aware of histograms; they maintain detailed information about the data distribution. I would like to know how normal statistics differ from histograms,
    since both use the same method_opt parameter with different values.
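    The raw histogram buckets themselves are visible in another dictionary view, USER_TAB_HISTOGRAMS; for example (the table name here is a placeholder):

    ```sql
    SELECT column_name, endpoint_number, endpoint_value
      FROM user_tab_histograms
     WHERE table_name = 'MYTABLE'
     ORDER BY column_name, endpoint_number;
    ```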

    http://docs.oracle.com/cd/E11882_01/server.112/e16638/toc.htm

    http://docs.oracle.com/cd/E11882_01/server.112/e16638/stats.htm#i13546

    http://docs.oracle.com/cd/E11882_01/server.112/e16638/stats.htm#i41587

    http://docs.oracle.com/cd/E11882_01/server.112/e16638/optimops.htm#i37746

    Etc.

  • Gathering statistics without histograms

    DB version: 10gR2


    Working around an ORA-600 bug per Metalink: collect statistics without histograms.

    In 10gR2, how can I gather statistics for a table without any column histograms being created? I know that this can be done using the METHOD_OPT parameter of the dbms_stats.gather_table_stats procedure. But I do not know how.

    Like this:

    exec dbms_stats.gather_table_stats (user, tabname => 'MYTABLE', method_opt => 'FOR ALL COLUMNS SIZE 1', estimate_percent => 100, cascade => TRUE);

    (I've added estimate_percent and cascade as well)
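    To confirm that no histograms were created, the HISTOGRAM column of USER_TAB_COLS can be checked afterwards (a sketch, same placeholder table name):

    ```sql
    SELECT column_name, num_distinct, histogram
      FROM user_tab_cols
     WHERE table_name = 'MYTABLE';
    -- with METHOD_OPT => 'FOR ALL COLUMNS SIZE 1', HISTOGRAM should be NONE for every column
    ```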

    Hemant K Collette
    http://hemantoracledba.blogspot.com

  • Issues in setting dbms_stats

    Hello, I use oracle9i.

    I have two questions when I read this help...

    http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1036461

    I would be grateful if someone could answer my questions.

    Question1
    =======
    Method_opt parameter.

    -AUTO: Oracle determines the columns to collect histograms based on the distribution of the data and the workload of the columns.
    -SKEWONLY: Oracle determines the columns to collect histograms based on the distribution of data in the columns.

    I understand SKEWONLY. With AUTO, it determines the columns based on the data distribution and the workload of the columns... What is the workload of a column? Does it depend on how many queries were executed against that column? How does Oracle track this?



    Question2
    ======
    Method_opt parameter.

    SIZE integer

    I think I understand: when we specify SIZE 1, it creates only one bucket, even if we have more than one distinct value in the column.

    Say we have 10 distinct values in a column; if we specify SIZE 5, it creates only 5 buckets. But we have 10 distinct values... How does it work in this scenario?

    Say we have 5 distinct values in a column; if we specify SIZE 10, does it create 10 buckets? But we have only 5 distinct values... How does it work in this scenario?

    There is a table called col_usage$ that stores information about column usage. The dbms_stats package uses it to determine whether or not a histogram will be gathered on a given column. How exactly it decides from that data whether to collect a histogram, I do not know.
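    For the curious, the tracked usage can be inspected directly; a sketch of such a query (it requires access to the SYS-owned tables, and the table name is a placeholder):

    ```sql
    SELECT o.name AS table_name, c.name AS column_name,
           u.equality_preds, u.range_preds, u.like_preds
      FROM sys.col_usage$ u
      JOIN sys.obj$ o ON o.obj# = u.obj#
      JOIN sys.col$ c ON c.obj# = u.obj# AND c.intcol# = u.intcol#
     WHERE o.name = 'MYTABLE';
    ```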

    SIZE 1 means that there is no histogram, as all values go into a single bucket.

    With 10 distinct values and only 5 buckets you get a height-balanced histogram, and the sampled values are spread evenly across the buckets.
    Each bucket has a max and a min value. The optimizer estimates the selectivity of a range from factors such as the min and max value in a bucket, the size of the bucket, the number of values per bucket, and the popularity of a given value (the number of buckets the value appears in).

    With 5 distinct values and 10 buckets you get a frequency histogram. You will have only 5 buckets, each holding one value. There is no need to have an empty bucket. Frequency histograms are more accurate in that the exact number of occurrences of a given value is known. Selectivity is calculated from the number of occurrences of a value in its bucket divided by the total number of occurrences of all values.
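    A minimal demo of the two cases above (a sketch; on the 10g versions discussed here, C5 with 5 distinct values and SIZE 10 should get a FREQUENCY histogram, while C10 with 10 distinct values and SIZE 5 should get a HEIGHT BALANCED one):

    ```sql
    create table hist_demo as
    select mod(rownum, 5) c5, mod(rownum, 10) c10
      from dual connect by level <= 1000;

    begin
      dbms_stats.gather_table_stats(user, 'HIST_DEMO',
        method_opt => 'FOR COLUMNS SIZE 10 C5 FOR COLUMNS SIZE 5 C10');
    end;
    /

    select column_name, num_distinct, histogram
      from user_tab_cols
     where table_name = 'HIST_DEMO';
    ```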

  • 11g: global statistics, once gathered, block incremental statistics

    This is a follow-up to the question 11g: incremental statistics - effects of bad implementation

    In a system that is supposed to use incremental statistics, global statistics were accidentally gathered by someone.

    These global statistics seem to prevent the switch to incremental statistics.

    The GLOBAL_STATS column of user_tab_col_statistics appears to be the key here.


    See this demo script:

    SET LINESIZE 130
    set serveroutput on
    
    call dbms_output.put_line('drop and re-create table ''T''...');
    
    -- clean up
    DROP TABLE t;
    
    -- create a table with partitions and subpartitions
    CREATE TABLE t (
      tracking_id       NUMBER(10),
      partition_key     DATE,
      subpartition_key  NUMBER(3),
      name              VARCHAR2(20 CHAR),
      status            NUMBER(1)
    )
    ENABLE ROW MOVEMENT
    PARTITION BY RANGE(partition_key) SUBPARTITION BY HASH(subpartition_key)
    SUBPARTITION TEMPLATE 4
    (
      PARTITION P_DATE_TO_20160101 VALUES LESS THAN (to_date('2016-01-01', 'YYYY-MM-DD')),
      PARTITION P_DATE_TO_20160108 VALUES LESS THAN (to_date('2016-01-08', 'YYYY-MM-DD')),
      PARTITION P_DATE_TO_20160115 VALUES LESS THAN (to_date('2016-01-15', 'YYYY-MM-DD')),
      PARTITION P_DATE_OTHER VALUES LESS THAN (MAXVALUE)
    );
    
    CREATE UNIQUE INDEX t_pk ON t(partition_key, subpartition_key, tracking_id);
    
    ALTER TABLE t ADD CONSTRAINT t_pk PRIMARY KEY(partition_key, subpartition_key, tracking_id);
    ALTER TABLE t ADD CONSTRAINT t_partition_key_nn check (partition_key IS NOT NULL);
    ALTER TABLE t ADD CONSTRAINT t_subpartition_key_nn check (subpartition_key IS NOT NULL);
    
    call dbms_output.put_line('populate table ''T''...'); 
    
    -- insert values into table (100 for first 2 partitions, last 2 remain empty)
    BEGIN
      FOR i IN 1..100
      LOOP
        INSERT INTO t VALUES(i,to_date('2015-12-31', 'YYYY-MM-DD'), i, 'test' || to_char(i), MOD(i,4) );
      END LOOP;
    
      FOR i IN 101..200
      LOOP
        INSERT INTO t VALUES(i,to_date('2016-01-07', 'YYYY-MM-DD'), i, 'test2' || to_char(i), MOD(i, 4));
      END LOOP;
      commit;
    END;
    /
    
    -- lock table stats, so that no automatic mechanism of Oracle can disturb us
    call dbms_output.put_line('lock statistics for table ''T''...');
    
    BEGIN
      dbms_stats.lock_table_stats(USER, 'T');
      commit;
    END;
    /
    
    call dbms_output.put_line('delete statistics for table ''T''...');
    
    -- make sure we have no table stats to begin with
    BEGIN
      dbms_stats.delete_table_stats(
        ownname => USER,
        tabname => 'T',
        partname => NULL,
        cascade_columns => TRUE,
        cascade_indexes => TRUE,
        force => TRUE);
      commit;
    END;
    /
    
    begin
      for rec in (select table_name, partition_name from user_tab_partitions where table_name = 'T')
      loop
        dbms_stats.delete_table_stats(
           ownname         => USER,
           tabname         => rec.table_name,
           partname        => rec.partition_name,
           cascade_columns => TRUE,
           cascade_indexes => TRUE,
           force           => TRUE);
     end loop;
     commit;
    end;
    /
    
    call dbms_output.put_line('verify no data avail in user_tab_col_statistics for table ''T''...');
    
    select table_name, column_name, to_char(last_analyzed, 'YYYY-MM-DD HH:MI:SS'), global_stats from user_tab_col_statistics where table_name = 'T';
    
    call dbms_output.put_line('verify no data avail in user_part_col_statistics for table ''T''...');
    
    -- not sure, if this is correct(?!):
    select * from user_part_col_statistics where table_name = 'T' and not (num_distinct is null and low_value is null and high_value is null);
    
    call dbms_output.put_line('''accidentally'' gather global stats in user_tab_col_statistics for table ''T''...');
    
    begin
        dbms_stats.gather_table_stats(
          ownname => USER,
          tabname => 'T',
          partname => null,
          estimate_percent => 1,
          degree => 5,
          block_sample => TRUE,
          granularity => 'GLOBAL',
          cascade => TRUE,
          force => TRUE,
          method_opt => 'FOR ALL COLUMNS SIZE 1'
        );
    end;
    /
    
    call dbms_output.put_line('verify global_stats in user_tab_col_statistics for table ''T''...');
    
    select table_name, column_name, to_char(last_analyzed, 'YYYY-MM-DD HH:MI:SS'), global_stats from user_tab_col_statistics where table_name = 'T';
    
    call dbms_output.put_line('wait 30 seconds...');
    
    call dbms_lock.sleep(30); -- might require to grant access
    
    call dbms_output.put_line('...done');
    
    call dbms_output.put_line('try to update global_stats in user_tab_col_statistics for table ''T'' by partition level statistic updates...');
    
    begin
      for rec in (select table_name, partition_name from user_tab_partitions where table_name = 'T')
      loop
        dbms_stats.gather_table_stats(
          ownname => USER,
          tabname => rec.table_name,
          partname => rec.partition_name,
          estimate_percent => 1,
          degree => 5,
          block_sample => TRUE,
          granularity => 'PARTITION',
          cascade => TRUE,
          force => TRUE,
          method_opt => 'FOR ALL COLUMNS SIZE 1'
        );
        dbms_stats.gather_table_stats(
          ownname => USER,
          tabname => rec.table_name,
          partname => rec.partition_name,
          estimate_percent => 1,
          degree => 5,
          block_sample => TRUE,
          granularity => 'SUBPARTITION',
          cascade => TRUE,
          force => TRUE,
          method_opt => 'FOR ALL COLUMNS SIZE 1');  
      end loop;
    end;
    /
    
    call dbms_output.put_line('re-verify global_stats in user_tab_col_statistics for table ''T'' (check for last_analyzed and global_stats)...');
    
    select table_name, column_name, to_char(last_analyzed, 'YYYY-MM-DD HH:MI:SS'), global_stats from user_tab_col_statistics where table_name = 'T';
    

    Output:

    Call completed.
    
    drop and re-create table 'T'...
    
    
    Table T dropped.
    
    
    Table T created.
    
    
    Unique index T_PK created.
    
    Table T altered.
    
    
    Table T altered.
    
    
    Table T altered.
    
    
    Call completed.
    
    populate table 'T'...
    
    
    PL/SQL procedure successfully completed.
    
    Call completed.
    
    lock statistics for table 'T'...
    
    
    PL/SQL procedure successfully completed.
    
    
    Call completed.
    
    delete statistics for table 'T'...
    
    
    PL/SQL procedure successfully completed.
    
    
    PL/SQL procedure successfully completed.
    
    
    Call completed.
    
    verify no data avail in user_tab_col_statistics for table 'T'...
    
    
    no rows selected
    
    
    
    Call completed.
    
    verify no data avail in user_part_col_statistics for table 'T'...
    
    
    no rows selected
    
    
    Call completed.
    
    'accidentally' gather global stats in user_tab_col_statistics for table 'T'...
    
    
    PL/SQL procedure successfully completed.
    
    
    Call completed.
    
    verify global_stats in user_tab_col_statistics for table 'T'...
    
    
    TABLE_NAME                     COLUMN_NAME                    TO_CHAR(LAST_ANALYZ GLO
    ------------------------------ ------------------------------ ------------------- ---
    T                              TRACKING_ID                    2016-01-28 02:09:31 YES
    T                              PARTITION_KEY                  2016-01-28 02:09:31 YES
    T                              SUBPARTITION_KEY               2016-01-28 02:09:31 YES
    T                              NAME                           2016-01-28 02:09:31 YES
    T                              STATUS                         2016-01-28 02:09:31 YES
    
    Call completed.
    
    wait 30 seconds...
    
    Call completed.
    
    
    Call completed.
    
    ...done
    
    
    Call completed.
    
    try to update global_stats in user_tab_col_statistics for table 'T' by partition level statistic updates...
    
    PL/SQL procedure successfully completed.
    
    
    Call completed.
    
    re-verify global_stats in user_tab_col_statistics for table 'T' (check for last_analyzed and global_stats)...
    
    
    TABLE_NAME                     COLUMN_NAME                    TO_CHAR(LAST_ANALYZ GLO
    ------------------------------ ------------------------------ ------------------- ---
    T                              TRACKING_ID                    2016-01-28 02:09:31 YES
    T                              PARTITION_KEY                  2016-01-28 02:09:31 YES
    T                              SUBPARTITION_KEY               2016-01-28 02:09:31 YES
    T                              NAME                           2016-01-28 02:09:31 YES
    T                              STATUS                         2016-01-28 02:09:31 YES
    
    

    It seems that the solution is to use the parameter cascade_parts => FALSE:

    begin
      dbms_stats.delete_table_stats(
        ownname => USER,
        tabname => 'T',
        partname => NULL,
        cascade_parts => FALSE, -- this is important
        cascade_columns => TRUE,
        cascade_indexes => TRUE,
        force => TRUE);
    end;
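    Once the global statistics are cleaned up, the move to incremental statistics is controlled by a table preference; a sketch (assuming the 11g INCREMENTAL table preference):

    ```sql
    begin
      dbms_stats.set_table_prefs(USER, 'T', 'INCREMENTAL', 'TRUE');
    end;
    /

    -- verify the setting
    select dbms_stats.get_prefs('INCREMENTAL', USER, 'T') from dual;
    ```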
    
  • Collecting statistics for a table

    Hi all

    DB version: 10.2.0.4
    OS: AIX 6.1

    I want to collect table stats for a table, as a query that uses it is slow. I also noticed that this table is accessed via a full table scan, and it was last analyzed 2 months ago.

    I intend to run the query below to collect the statistics. The table has 50 million records.
    COUNT (*)
    ----------
    51364617

    I expect that this will take time if I run the query as below.


    EXEC DBMS_STATS.gather_table_stats ('schema_name', 'table_name');

    My doubts are listed below.

    1. Can I also use the estimate_percent parameter to collect statistics?

    2. How much should we specify for the estimate_percent parameter?

    3. What difference will it make if I use the estimate_percent parameter?

    Thanks in advance

    Published by: user13364377 on March 27, 2012 13:28

    If you are worried about the stats collection process running for a long time, consider collecting statistics in parallel.

    1. Can you use estimate_percent? Sure! Go ahead.
    2. What percentage to use? Why not let the data decide, with auto_sample_size? Various "rules of thumb" have been thrown around over the years, usually around 10 to 20%.
    3. What difference will it make? Very little, most likely. Occasionally you can see a case where a small sample makes a difference, but in general it's perfectly OK to estimate the stats.

    Maybe something like this:

    BEGIN
      dbms_stats.gather_table_stats(ownname => user, tabname => 'MY_TABLE',
       estimate_percent => dbms_stats.auto_sample_size, method_opt=>'for all columns size auto',
       cascade=>true,degree=>8);
    END;
    
  • Parameters of the DBMS_STATS.GATHER_SCHEMA_STATS procedure

    Hello

    I use an Oracle 10.2.0.3 database. When I describe DBMS_STATS, I get the output below.

    PROCEDURE GATHER_SCHEMA_STATS
    Argument Name
    ----------------
    OWNNAME
    ESTIMATE_PERCENT
    BLOCK_SAMPLE
    METHOD_OPT
    DEGREE
    GRANULARITY
    CASCADE
    STATTAB
    STATID
    OPTIONS
    STATOWN
    NO_INVALIDATE
    GATHER_TEMP
    GATHER_FIXED
    STATTYPE
    FORCE

    For the two parameters GATHER_TEMP and GATHER_FIXED, I couldn't find an explanation in the documentation.

    Any idea what these settings are for?

    Thanks in advance,


    Best regards
    oratest

    You can find GATHER_TEMP in the Oracle 9i documentation http://download.oracle.com/docs/cd/B10501_01/appdev.920/a96612/d_stats2.htm#1001434 which says:

    >
    gather_temp

    Gathers statistics on global temporary tables. The temporary table must be created with the clause "on commit preserve rows". The statistics are based on the data in the session where this procedure is run, but are shared across all sessions.
    >

    And it is likely that GATHER_FIXED means gathering statistics on fixed objects (a.k.a. dynamic performance views http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_1001.htm#sthref2417).
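    Note that from 10g on, fixed-object statistics are normally gathered with a dedicated procedure rather than a GATHER_SCHEMA_STATS flag:

    ```sql
    begin
      dbms_stats.gather_fixed_objects_stats;  -- needs the ANALYZE ANY DICTIONARY privilege (or SYSDBA)
    end;
    /
    ```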

    It is not the first time that missing documentation for certain parameters of the DBMS_STATS package has been reported:
    Missing STATTYPE parameter in DBMS_STATS.GATHER_TABLE_STATS

    Edited by: P. Forstmann Nov. 29. 2010 19:50

  • Explain plans differ as the parameter value changes

    Hi all

    My colleague posted a similar question a few days ago. That one happened because of a bad index. But now we are in a strange situation.

    DB:
    SQL> select * from v$version;
    
    BANNER
    ----------------------------------------------------------------
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE    10.2.0.1.0      Production
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    We use the query below and it was working fine until the 13th.
    SQL> explain plan for
      2  SELECT *
      3    FROM gacc_dtl_v1  acc,
      4         gcus_dtl_v1  cus,
      5         gtxn_dtl_v1  txn
      6   WHERE txn.customer_id = cus.customer_number(+)
      7   AND txn.batch_id = cus.batch_id(+)
      8   AND txn.account_number = acc.id
      9   AND acc.batch_id = '130609'
     10   AND cus.batch_id(+) = '130609'
     11   AND txn.batch_id = '130609' AND cus.target IN ('30');
    
    Explained.
    
    SQL> select * from table(dbms_xplan.display);
    
    PLAN_TABLE_OUTPUT
    -------------------------------------------------------------------------------------------------------------
    Plan hash value: 566819363
    
    -------------------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name                | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    -------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |                     |     1 |   947 |       | 16963   (1)| 00:03:24 |
    |   1 |  NESTED LOOPS                 |                     |     1 |   947 |       | 16963   (1)| 00:03:24 |
    |*  2 |   HASH JOIN                   |                     |    41 | 26322 |  9136K| 16799   (1)| 00:03:22 |
    |*  3 |    TABLE ACCESS BY INDEX ROWID| GTXN_DTL_V1         | 31055 |  8764K|       |  2430   (1)| 00:00:30 |
    |*  4 |     INDEX RANGE SCAN          | GTXN_V1_BATCHID_NDX | 60524 |       |       |   156   (2)| 00:00:02 |
    |*  5 |    TABLE ACCESS BY INDEX ROWID| GCUS_DTL_V1         |   176K|    59M|       | 10869   (1)| 00:02:11 |
    |*  6 |     INDEX RANGE SCAN          | IDX_CUS2_V1         |   198K|       |       |   527   (2)| 00:00:07 |
    |   7 |   TABLE ACCESS BY INDEX ROWID | GACC_DTL_V1         |     1 |   305 |       |     4   (0)| 00:00:01 |
    |*  8 |    INDEX RANGE SCAN           | GACC_DTL_V1_IDX     |     1 |       |       |     3   (0)| 00:00:01 |
    -------------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - access("TXN"."CUSTOMER_ID"="CUS"."CUSTOMER_NUMBER" AND "TXN"."BATCH_ID"="CUS"."BATCH_ID")
       3 - filter("TXN"."CUSTOMER_ID" IS NOT NULL)
       4 - access("TXN"."BATCH_ID"='130609')
       5 - filter("CUS"."TARGET"='30')
       6 - access("CUS"."BATCH_ID"='130609')
       8 - access("TXN"."ACCOUNT_NUMBER"="ACC"."ID" AND "ACC"."BATCH_ID"='130609')
           filter(SUBSTR("TXN"."ACCOUNT_NUMBER",1,3)=SUBSTR("ACC"."ID",1,3))
    
    26 rows selected.
    It shows a hash join and nested loops with cost 16963 and gives the result in 2-3 seconds. It gives the same explain plan even now if we use batch_id = '130609'.

    Now, all of a sudden, since yesterday it gives the different explain plan below. The only difference in the query below is the value of batch_id.
    SQL> explain plan for
      2  SELECT *
      3    FROM gacc_dtl_v1  acc,
      4         gcus_dtl_v1  cus,
      5         gtxn_dtl_v1  txn
      6   WHERE txn.customer_id = cus.customer_number(+)
      7   AND txn.batch_id = cus.batch_id(+)
      8   AND txn.account_number = acc.id
      9   AND acc.batch_id = '150609'
     10   AND cus.batch_id(+) = '150609'
     11   AND txn.batch_id = '150609' AND cus.target IN ('30');
    
    Explained.
    
    SQL> select * from table(dbms_xplan.display);
    
    PLAN_TABLE_OUTPUT
    ------------------------------------------------------------------------------------------------------------
    Plan hash value: 773603995
    
    --------------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |                        |     1 |   947 |    77   (0)| 00:00:01 |
    |   1 |  NESTED LOOPS                 |                        |     1 |   947 |    77   (0)| 00:00:01 |
    |   2 |   NESTED LOOPS                |                        |     1 |   594 |    73   (0)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| GACC_DTL_V1            |     1 |   305 |     4   (0)| 00:00:01 |
    |*  4 |     INDEX RANGE SCAN          | GACC_DTL_BATCH_ID_INDX |     1 |       |     3   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS BY INDEX ROWID| GTXN_DTL_V1            |     1 |   289 |    69   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN          | IDX_TXN2_V1            |   125 |       |    12   (0)| 00:00:01 |
    |*  7 |   TABLE ACCESS BY INDEX ROWID | GCUS_DTL_V1            |     1 |   353 |     4   (0)| 00:00:01 |
    |*  8 |    INDEX RANGE SCAN           | IDX_CUS3_V1            |     1 |       |     3   (0)| 00:00:01 |
    --------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       4 - access("ACC"."BATCH_ID"='150609')
       5 - filter("TXN"."CUSTOMER_ID" IS NOT NULL AND "TXN"."BATCH_ID"='150609')
       6 - access("TXN"."ACCOUNT_NUMBER"="ACC"."ID")
           filter(SUBSTR("TXN"."ACCOUNT_NUMBER",1,3)=SUBSTR("ACC"."ID",1,3))
       7 - filter("CUS"."TARGET"='30')
       8 - access("CUS"."BATCH_ID"='150609' AND "TXN"."CUSTOMER_ID"="CUS"."CUSTOMER_NUMBER")
           filter("TXN"."BATCH_ID"="CUS"."BATCH_ID")
    
    26 rows selected.
It shows two nested loops with a cost of 77, but it runs for hours. Very, very slow. No idea what's going on...
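One way to see where the optimizer's estimate goes wrong is to compare estimated and actual row counts. A hedged sketch (the GATHER_PLAN_STATISTICS hint and DBMS_XPLAN.DISPLAY_CURSOR are available in 10g and later; the query is the one from the post):

```sql
-- Run the slow statement once with rowsource statistics enabled
SELECT /*+ GATHER_PLAN_STATISTICS */ *
  FROM gacc_dtl_v1 acc, gcus_dtl_v1 cus, gtxn_dtl_v1 txn
 WHERE txn.customer_id    = cus.customer_number(+)
   AND txn.batch_id       = cus.batch_id(+)
   AND txn.account_number = acc.id
   AND acc.batch_id       = '150609'
   AND cus.batch_id(+)    = '150609'
   AND txn.batch_id       = '150609'
   AND cus.target IN ('30');

-- Then compare E-Rows (estimate) with A-Rows (actual) for each step
SELECT * FROM table(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```

A large gap between E-Rows and A-Rows on the GACC_DTL_BATCH_ID_INDX step would confirm a cardinality misestimate for the new batch_id value.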
     select i.table_name,i.index_name,index_type,c.column_name,c.column_position,e.column_expression
       from all_indexes i, all_ind_columns c,all_ind_expressions e
       where c.index_name = i.index_name
       and e.index_name(+) = i.index_name
   and i.table_name in ('GCUS_DTL_V1','GACC_DTL_V1','GTXN_DTL_V1')
       order by 1,2,4
    
    TABLE_NAME     INDEX_NAME          INDEX_TYPE          COLUMN_NAME     COLUMN_POSITION     COLUMN_EXPRESSION
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    GACC_DTL_V1     GACC_DTL_BATCH_ID_INDX     NORMAL               BATCH_ID          1     
    GACC_DTL_V1     GACC_DTL_V1_IDX          NORMAL               BATCH_ID          2     
    GACC_DTL_V1     GACC_DTL_V1_IDX          NORMAL               ID               1     
    GACC_DTL_V1     GACC_DTL_V1_IDX2     FUNCTION-BASED NORMAL     SYS_NC00101$          1     SUBSTR("ID",1,3)
    GACC_DTL_V1     IDX_ACC1_V1          NORMAL               CATEGORY          1     
    GACC_DTL_V1     IDX_ACC3_V1          FUNCTION-BASED NORMAL     SYS_NC00099$          1     "CUSTOMER_NUMBER"||'.'||"LIMIT_REF"
    GACC_DTL_V1     IDX_ACC4_V1          FUNCTION-BASED NORMAL     SYS_NC00100$          1     "CUSTOMER_NUMBER"||'.000'||"LIMIT_REF"
    GACC_DTL_V1     IDX_ACC5_V1          NORMAL               POSTING_RESTRICT     1     
    GACC_DTL_V1     IDX_CUS5_V1          NORMAL               CUSTOMER_NUMBER          1     
    GACC_DTL_V1     IDX_CUS6_V1          NORMAL               LIMIT_REF          1     
    GCUS_DTL_V1     GCUS_DTL_V1_IDX1     NORMAL               CUSTOMER_NUMBER          1     
    GCUS_DTL_V1     IDX_CUS2_V1          NORMAL               BATCH_ID          1     
    GCUS_DTL_V1     IDX_CUS3_V1          NORMAL               BATCH_ID          1     
    GCUS_DTL_V1     IDX_CUS3_V1          NORMAL               CUSTOMER_NUMBER          2     
    GCUS_DTL_V1     IDX_CUS3_V1          NORMAL               INDUSTRY          4     
    GCUS_DTL_V1     IDX_CUS3_V1          NORMAL               SECTOR               3     
    GCUS_DTL_V1     IDX_CUS4_V1          FUNCTION-BASED NORMAL     SYS_NC00078$          1     SUBSTR("DATE_STAMP",1,6)
We also do not understand why the filter SUBSTR("TXN"."ACCOUNT_NUMBER",1,3)=SUBSTR("ACC"."ID",1,3) is used in both queries.
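That SUBSTR filter most likely comes from the function-based index GACC_DTL_V1_IDX2 listed above: a function-based index materializes as a hidden virtual column (here SYS_NC00101$), and the optimizer can generate additional predicates on that expression. A minimal sketch showing the hidden column (table and index names here are illustrative, not the poster's):

```sql
create table demo_acc (id varchar2(20), batch_id varchar2(6));
create index demo_acc_fbi on demo_acc (substr(id, 1, 3));

-- The function-based index shows up as a hidden virtual column
select column_name, data_default, hidden_column
  from user_tab_cols
 where table_name = 'DEMO_ACC';
```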

All tables were analyzed today.

    Please share your thoughts on this.

    Thanks in advance,
    Jac

    Jac says:

L          H          NUM_BUCKETS  LAST_ANALYZED  SAMPLE_SIZE  HISTOGRAM
---------- ---------- ------------ -------------- ------------ ----------
010109     311208     235          13/Jun/2009    5,343        FREQUENCY
    

You have a frequency histogram on the BATCH_ID column that is missing at least 2 values according to your index statistics (235 buckets vs. 237 distinct keys).

If the value you use in the query is one of the missing ones, this could explain the bad cardinality estimate (since you are on a pre-10.2.0.4 release; in 10.2.0.4 this behavior changes).

The sample size of 5,300 rows is also very low, given the 57,000,000 rows according to the index statistics.

    You have two options (which can be combined):

- Increase the sample size by passing an explicit estimate_percent parameter, for example at least 10 percent:

exec DBMS_STATS.GATHER_TABLE_STATS(null, 'GACC_DTL_V1', estimate_percent => 10, method_opt => 'FOR COLUMNS BATCH_ID SIZE 254', cascade => false)

- Get rid of the histogram:

exec DBMS_STATS.GATHER_TABLE_STATS(null, 'GACC_DTL_V1', method_opt => 'FOR COLUMNS BATCH_ID SIZE 1', cascade => false)

Note: Is there a particular reason why you store numbers in VARCHAR columns? This might be why Oracle believes it must generate a histogram when using the SIZE AUTO option.

I tend to favor removing the histogram, but you should first check the data to see whether the BATCH_ID values are evenly spread and the histogram is reasonable:

    select
            batch_id
          , count(*)
    from
            gacc_dtl_v1
    group by
            batch_id;
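To see which values the frequency histogram actually captured (and whether the problematic batch_id is among them), you can also inspect the histogram endpoints. A sketch, assuming the histogram of interest is the one on GACC_DTL_V1.BATCH_ID:

```sql
select endpoint_number, endpoint_actual_value
  from user_tab_histograms
 where table_name  = 'GACC_DTL_V1'
   and column_name = 'BATCH_ID'
 order by endpoint_number;
```

If '150609' does not appear among the endpoints, the optimizer is estimating its cardinality from the missing-value rules described above.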
    

No constraints exist at the DB level. All are enforced at the application level.

    Have you checked this in DBA/ALL/USER_CONSTRAINTS?

It is also a good idea to have constraints at the database level. They keep your data consistent and quite often help the optimizer. On 10.2 and later they even enable things like join elimination, which can make a huge difference in performance.
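A quick sketch of what join elimination looks like when a validated foreign key is in place (table names here are illustrative):

```sql
create table dept (deptno number primary key, dname varchar2(30));
create table emp  (empno  number primary key,
                   deptno number not null references dept);

-- Only EMP columns are selected, and the FK plus NOT NULL guarantee every
-- EMP row has exactly one matching DEPT row, so on 10.2+ the optimizer
-- can drop DEPT from the plan entirely.
select e.empno
  from emp e join dept d on d.deptno = e.deptno;
```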

    Kind regards
    Randolf

Oracle related blog stuff:
http://oracle-randolf.blogspot.com/

SQLTools++ for Oracle (open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/SQLT-pp/

    Published by: Randolf Geist on June 16, 2009 11:48

    Comment added constraints

  • AUTHID CURRENT_USER collect statistics

    Hi all

I am using Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production. I created a procedure in SCHEMA1 that gathers table statistics via DBMS_STATS, declared with AUTHID CURRENT_USER.

    CREATE OR REPLACE PROCEDURE SCHEMA1.getstats ( p_schema IN VARCHAR2,
                                p_table  IN VARCHAR2)
    AUTHID CURRENT_USER
    AS
    BEGIN
            dbms_stats.gather_table_stats(ownname          => p_schema,
                      tabname          => p_table,
                      method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
                      degree           => 8,
                      force            => true,
                      no_invalidate    => false);
    END;
    

I'm trying to use the same procedure to collect statistics for a table in another schema. However, I get the error ORA-20000: Unable to analyze TABLE "SCHEMA2"."DETAIL_VALUE", insufficient privileges or does not exist.

    EXEC getstats ( 'SCHEMA2','DETAIL_VALUE');
    

I granted privileges on the table to SCHEMA1, but I still get the error. I understand that AUTHID CURRENT_USER makes the procedure run with the privileges of the invoking user, yet the error persists. Can someone help me with this?

    GRANT ALL ON DETAIL_VALUE TO SCHEMA1
    

It has nothing to do with AUTHID CURRENT_USER and everything to do with privileges. A grant on the table is not enough to let you analyze it.

    SQL> conn u1/u1
    Connected.
    SQL> create table foo(bar number);                                               
    
    Table created.                                                                   
    
    SQL> grant all on foo to u2;                                                     
    
    Grant succeeded.                                                                 
    
    SQL> conn u2/u2
    Connected.
    SQL> exec dbms_stats.gather_table_stats('U1', 'FOO');
    BEGIN dbms_stats.gather_table_stats('U1', 'FOO'); END;                           
    
    *
    ERROR at line 1:
    ORA-20000: Unable to analyze TABLE "U1"."FOO", insufficient privileges or does
    not exist
    ORA-06512: at "SYS.DBMS_STATS", line 33859
    ORA-06512: at line 1                                                             
    
    SQL> conn / as sysdba
    Connected.                                                                       
    
    SQL> grant analyze any to u2;                                                    
    
    Grant succeeded.                                                                 
    
    SQL> conn u2/u2
    Connected.
    SQL> exec dbms_stats.gather_table_stats('U1', 'FOO');                            
    
    PL/SQL procedure successfully completed.
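If granting the broad ANALYZE ANY system privilege is undesirable, an alternative sketch is a definer-rights procedure owned by the schema that owns the table, with EXECUTE granted out (names follow the U1/FOO example above):

```sql
-- Owned by U1; runs with U1's privileges (definer rights is the default)
create or replace procedure u1.gather_foo_stats as
begin
  dbms_stats.gather_table_stats(ownname => 'U1', tabname => 'FOO');
end;
/
grant execute on u1.gather_foo_stats to u2;

-- U2 can now gather stats on U1.FOO without ANALYZE ANY:
-- exec u1.gather_foo_stats
```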
    
  • Collecting table statistics on Windows

    Hello

Could you please help me gather statistics for the tables below on Windows? I tried a DBMS_SCHEDULER job for one table, but it is not gathering statistics for the tables.

OWNER   TABLE_NAME         NUM_ROWS      BLOCKS      SIZE
------- ------------------ ------------- ----------- ---------
MQRDW   DWREFERENCETRACE   2891985937    26372904    17.86 GB
TRAY    PON_SERIAL         563722072     5135734     8.12 GB
TRAY    PON_PSN            235885173     2009064     2.67 GB
TRAY    PON_BOM_LOG        37199475      212936      601 MB
TRAY    UNIQUE_ITEM_LOTS   6633907       79710       160 MB
TRAY    PON_BOM            5921377       41717       328 MB

SQL> begin
  2    dbms_scheduler.create_job(
  3      job_name        => 'SCOTT_JOB_SCHEDULE',
  4      job_type        => 'EXECUTABLE',
  5      job_action      => 'begin dbms_stats.gather_table_stats(ownname => ''TRACE3'',
  6                          tabname          => ''PON_PSN'',
  7                          estimate_percent => 30,
  8                          cascade          => true,
  9                          method_opt       => ''for all columns size 1'',
 10                          degree           => 5,
 11                          options          => ''GATHER STALE''); end;',
 12      repeat_interval => 'freq=daily; byhour=04; byminute=0; bysecond=0',
 13      enabled         => TRUE,
 14      comments        => 'custom stats collection for risk engine');
 15  end;
 16  /

PL/SQL procedure successfully completed.
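One thing worth checking: the job above passes a PL/SQL anonymous block as job_action while declaring job_type as EXECUTABLE, which is meant for external OS programs; for an anonymous block the job type should be PLSQL_BLOCK. A hedged sketch of the corrected job (names and values taken from the post):

```sql
begin
  dbms_scheduler.create_job(
    job_name        => 'SCOTT_JOB_SCHEDULE',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[begin
                            dbms_stats.gather_table_stats(
                              ownname          => 'TRACE3',
                              tabname          => 'PON_PSN',
                              estimate_percent => 30,
                              cascade          => true,
                              method_opt       => 'for all columns size 1',
                              degree           => 5,
                              options          => 'GATHER STALE');
                          end;]',
    repeat_interval => 'freq=daily; byhour=4; byminute=0; bysecond=0',
    enabled         => true,
    comments        => 'custom stats collection for risk engine');
end;
/
```

The q'[...]' quoting (available since 10g) avoids doubling every quote inside the job action.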


    Total number of CPU available on this server is 8

    Oracle - 10.2.0.4.0

    OS - windows

    Kind regards

    Bala

52 million rows?

And how many of them change or are inserted each day?

It is possible that you never need to gather its stats again,

and everything will probably continue to work as expected.

Read the Performance Tuning Guide to understand why and when you need to collect statistics:

https://docs.oracle.com/cd/E11882_01/server.112/e41573/stats.htm#PFGRF94714

After reading this guide, you'll be ready for any question you get from that team.

Sometimes you need to say 'no', and to explain why the answer is 'no'.

  • How to permanently avoid the "exporting questionable statistics" warning?

Good morning, experts.

I want to permanently avoid EXP-00091: Exporting questionable statistics.

And I don't want to have to use statistics=none every time.

    Anyone can provide a permanent solution.

SQL> select value
  2  from nls_database_parameters
  3  where parameter = 'NLS_CHARACTERSET';

VALUE
----------------------------------------
WE8MSWIN1252

SQL> exec dbms_stats.gather_schema_stats('U1');

    PL/SQL procedure successfully completed.

SQL> show parameter nls;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
nls_calendar                         string
nls_comp                             string      BINARY
nls_currency                         string
nls_date_format                      string
nls_date_language                    string
nls_dual_currency                    string
nls_iso_currency                     string
nls_language                         string      AMERICAN
nls_length_semantics                 string      BYTE
nls_nchar_conv_excp                  string      FALSE
nls_numeric_characters               string
nls_sort                             string
nls_territory                        string      AMERICA
nls_time_format                      string
nls_time_tz_format                   string
nls_timestamp_format                 string
nls_timestamp_tz_format              string

$ exp file=tts_exp.dmp log=tts_exp.log transport_tablespace=y tablespaces=TBS1,TBS2

    Export: Release 11.2.0.1.0 - Production Fri Apr 3 10:04:40 2015

    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

    User name: / as sysdba

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production

    With partitioning, OLAP, Data Mining and Real Application Testing options

Export done in US7ASCII character set and AL16UTF16 NCHAR character set

server uses WE8MSWIN1252 character set (possible charset conversion)

Note: table data (rows) will not be exported

About to export transportable tablespace metadata...

For tablespace TBS1 ...

. exporting cluster definitions

. exporting table definitions

. . exporting table                            EMP

EXP-00091: Exporting questionable statistics.  (repeated 6 times)

. . exporting table                           DEPT

EXP-00091: Exporting questionable statistics.  (repeated 3 times)

. . exporting table                        PAYROLL

EXP-00091: Exporting questionable statistics.  (repeated 3 times)

For tablespace TBS2 ...

. exporting cluster definitions

. exporting table definitions

. exporting referential integrity constraints

. exporting triggers

. end transportable tablespace metadata export

Export terminated successfully with warnings.

    Hello

The key is in the log:

    Export in US7ASCII and AL16UTF16 NCHAR character set

    Server uses WE8MSWIN1252 (possible character set conversion) character set

Set the NLS_LANG environment variable to match the server character set before you run exp:

export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252

    --

    Bertrand

  • Table statistics gathering takes a long time in 10g R2 (10.2.0.5)

    Hello Oracle Experts!

    Need your urgent help...

The issue: table statistics gathering takes a long time in 10g R2 (10.2.0.5) on one database, yet completes in minutes on another. Both databases are hosted on VM servers, and RAM, CPU, storage, and database initialization parameters are identical.

    Table - size 34 GB

    Name of the table - SLPDOCTAX

SQL - exec dbms_stats.gather_table_stats(ownname => 'FAST', tabname => 'SLPDOCTAX', estimate_percent => 30, cascade => true, degree => 5, method_opt => 'FOR ALL INDEXED COLUMNS', granularity => 'ALL');

Measures taken so far:

1. Checked I/O with the dd command; no problems on the disk side
2. Memory and CPU usage are normal

    Please advise what measures must be taken to solve this problem

    Dear all,

We have identified the problem... it was an index issue, and it has been resolved by rebuilding the index.

    Rgds

    Farvacque

  • What is advised to collect statistics for the huge tables?

We have a staging database where some tables are huge, hundreds of GB in size. The automatic stats gathering task runs, but sometimes it misses its maintenance window.

    We would like to know the best practices or tips.

    Thank you.

Statistics gathering efficiency can be improved with:

1. Parallelism
2. Incremental statistics

Using parallelism

Parallelism can be used in several ways for statistics gathering:

1. Intra-object parallelism
2. Inter-object parallelism
3. Intra- and inter-object parallelism combined

Intra-object parallelism

The DBMS_STATS package has a DEGREE parameter. This parameter controls intra-object parallelism: the number of parallel server processes used to gather statistics on a single object. By default it is 1. You can change the default with the DBMS_STATS.SET_PARAM procedure. Alternatively, you can let Oracle determine the optimal number of parallel processes by setting DEGREE to the DBMS_STATS.AUTO_DEGREE value.
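A minimal sketch of both options (the table name is illustrative):

```sql
-- Explicit degree of parallelism for a single gather
exec dbms_stats.gather_table_stats(user, 'BIG_TABLE', degree => 8)

-- Let Oracle choose the degree automatically
exec dbms_stats.gather_table_stats(user, 'BIG_TABLE', degree => dbms_stats.auto_degree)
```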

Inter-object parallelism

If you are on Oracle Database 11.2.0.2 or later, you can set the CONCURRENT statistics preference, which controls inter-object parallelism. When CONCURRENT is set to TRUE, Oracle uses the Scheduler and Advanced Queuing to run several statistics-gathering jobs at once. The number of concurrent jobs is controlled by the JOB_QUEUE_PROCESSES parameter, which should be roughly two times the number of CPU cores (with two CPUs of 8 cores each, JOB_QUEUE_PROCESSES should be 2 (CPUs) x 8 (cores) x 2 = 32). You must set this parameter at the system level (ALTER SYSTEM SET ...).
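A sketch of enabling concurrent statistics gathering on 11.2.0.2+ (the schema name is illustrative; JOB_QUEUE_PROCESSES is sized as described above):

```sql
-- Sized for e.g. 2 CPUs x 8 cores x 2 = 32
alter system set job_queue_processes = 32;

exec dbms_stats.set_global_prefs('CONCURRENT', 'TRUE')

-- Schema-level gathers now run multiple statistics jobs concurrently
exec dbms_stats.gather_schema_stats('STAGING')
```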

Incremental statistics

This option is best suited to partitioned tables. If the INCREMENTAL preference for a partitioned table is set to TRUE, the GRANULARITY parameter of DBMS_STATS.GATHER_TABLE_STATS is set to GLOBAL, and its ESTIMATE_PERCENT parameter is set to AUTO_SAMPLE_SIZE, Oracle will scan only the partitions that have changed.
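A sketch of the incremental setup for a partitioned table (the table name is illustrative):

```sql
begin
  -- Mark the table for incremental (synopsis-based) global stats
  dbms_stats.set_table_prefs(user, 'SALES_PART', 'INCREMENTAL', 'TRUE');

  -- Only changed partitions are rescanned; global stats are derived
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'SALES_PART',
    granularity      => 'GLOBAL',
    estimate_percent => dbms_stats.auto_sample_size);
end;
/
```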

For more information, read the DBMS_STATS documentation.
