Call dbms_stats.gather_database_stats_job_proc()

Good day:

Oracle 11gR2

Are there negative consequences to deactivating this call:

Call dbms_stats.gather_database_stats_job_proc()

It consumes a lot of resources when running.  Is it necessary?  If it must stay, are there other options besides disabling it?

Thank you

Aqua

Aqua,

Given that this job only re-gathers stale statistics (where more than 10 percent of the table has changed), disabling it can leave you with outdated execution plans, and that is a bad thing.

A better measure would be to lock the statistics of the tables whose statistics you do not WANT changed, using dbms_stats.lock_table_stats.
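
For illustration, a minimal sketch of that approach (the schema and table names are only placeholders):

exec dbms_stats.lock_table_stats(ownname => 'SCOTT', tabname => 'ORDERS')
-- and later, if you deliberately want fresh statistics on that table:
exec dbms_stats.unlock_table_stats(ownname => 'SCOTT', tabname => 'ORDERS')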

Right now you are trying to solve a problem with a big hammer; you can get hurt in the process.

Sybrand Bakker
Senior Oracle DBA

Tags: Database

Similar Questions

  • Is dbms_stats.gather_database_stats_job_proc necessary for AWR snapshots?

    Hello

    This originally came up as a question about my databases in the context of SQL Tuning and ADDM findings.

    Are these stats needed by AWR? If not, can I turn off the stats gathering?

    Thank you

    (10.2.0.4)

    If you do a big load, the table statistics will be very different from the actual content, which may affect plans / performance.
    You can gather the stats manually after a big load using dbms_stats, or you can simply wait for the regular dbms_stats job to run and do the work for you.
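
    For example, a minimal sketch of the manual route (owner and table name are placeholders):

    exec dbms_stats.gather_table_stats(ownname => 'APP_OWNER', tabname => 'BIG_LOADED_TABLE', cascade => TRUE)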

  • DBMS_STATS: what deletes the stats?

    Hi all

    This is probably a stupid question, but I've searched high and low and have not found an adequate answer to the following: when are stats deleted?

    I read that table and index stats are deleted whenever you truncate a table, but the (minimal) tests I did do not confirm this. By "tests" I mean that I truncated a table and then exported and viewed the table statistics via DBMS_STATS.EXPORT_TABLE_STATS (hence "minimal")... The table statistics appear to have survived the truncation.

    Are table / index stats deleted when you truncate / drop / delete all records from a table? Or are they not removed at all?

    Hello

    Statistics on a table are deleted when you call dbms_stats.delete_table_stats or when you drop the table. Why would you think that Oracle would secretly remove a table's stats behind your back unless you asked it to? If you read that somewhere you should post a link to the statement - we cannot judge it without the context, but these days a lot of silly things get posted on the internet, and this could be one of them.

    You should trust what you see (your own tests) over what you hear, especially from unknown people on the internet.
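
    If you want to repeat the test yourself, a minimal sketch (on 10g or later, where USER_TAB_STATISTICS exists; the table name is made up):

    create table t_stats_test as select level id from dual connect by level <= 1000;
    exec dbms_stats.gather_table_stats(user, 'T_STATS_TEST')
    select num_rows, last_analyzed from user_tab_statistics where table_name = 'T_STATS_TEST';
    truncate table t_stats_test;
    -- run the same query again: num_rows and last_analyzed are still there,
    -- i.e. the truncate did not delete the statistics
    select num_rows, last_analyzed from user_tab_statistics where table_name = 'T_STATS_TEST';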

    Best regards
    Nikolai

  • DBMS_STATS and LAST_ANALYZED question

    Folks:

    Oracle 11g Release 1 (v11.1.0.7)
    Red Hat 5.7
    Not using partitioning


    I am trying to force an update of the stats for some tables. Using DBMS_STATS I thought it would be easy, but for some reason it turns out not to be the case.

    SQL > exec dbms_stats.gather_table_stats (ownname => 'SCOTT', estimate_percent => null, cascade => true, tabname => 'NEW_TABLE');

    SQL > select last_analyzed, stale_stats from dba_tab_statistics where owner = 'SCOTT' and table_name = 'NEW_TABLE';

    LAST_ANAL STA
    --------- ---
    01-NOV-10 NO

    I can call dbms_stats.delete_table_stats and the LAST_ANALYZED and STALE_STATS columns become empty... I would think that running dbms_stats.gather_table_stats would then fill in these columns again, but they remain empty.

    SQL > exec dbms_stats.gather_table_stats (ownname => 'SCOTT', estimate_percent => null, cascade => true, tabname => 'NEW_TABLE');

    SQL > select last_analyzed, stale_stats from dba_tab_statistics where owner = 'SCOTT' and table_name = 'NEW_TABLE';

    LAST_ANAL STA
    --------- ---
    {NULL} {NULL}


    The ONLY thing I found that fills in the columns after issuing delete_table_stats is:

    SQL > analyze table SCOTT.NEW_TABLE compute statistics;

    SQL > select last_analyzed, stale_stats from dba_tab_statistics where owner = 'SCOTT' and table_name = 'NEW_TABLE';

    LAST_ANAL STA
    --------- ---
    06-FEB-12 NO



    As far as I know, this is contrary to the way my Oracle world is supposed to work... I thought DBMS_STATS was the new way to gather stats, but I am now finding that this package does not seem to be working properly. Am I missing something?

    The flag for gathering statistics into the pending area is persistent, so it does not matter whether anyone is logged in right now.

    In addition, you can check whether the flag is set at the system level (not only for this table).
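
    Assuming this refers to the PUBLISH preference that controls pending statistics (an assumption on my part), a quick check could look like this:

    -- table-level setting
    select dbms_stats.get_prefs('PUBLISH', 'SCOTT', 'NEW_TABLE') from dual;
    -- system-wide setting
    select dbms_stats.get_prefs('PUBLISH') from dual;
    -- FALSE means newly gathered stats go to the pending area instead of the dictionary;
    -- the table can be switched back with:
    exec dbms_stats.set_table_prefs('SCOTT', 'NEW_TABLE', 'PUBLISH', 'TRUE')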

    Lordane Iotzov
    http://iiotzov.WordPress.com/

  • Collect Stats Job Proc in 11.2.0.3

    Hello

    I would like to discuss this topic a little. What I don't know is what directly invokes dbms_stats.gather_database_stats_job_proc().

    It originally came up because of nightly application performance issues and lost connections.

    It is not in dba_jobs; in dba_scheduler_jobs it shows as disabled; in dba_scheduler_programs the program is enabled and the call to dbms_stats.gather_database_stats_job_proc is there in program_action. control_management_pack_access is NONE, so the block "begin if prvt_advisor.is_pack_enabled('DIAGNOSTIC') then dbsnmp.bsln_internal.maintain_statistics; end; end;" will be skipped... The autotask "auto optimizer stats collection" is enabled.

    Is it the autotask that invokes the *_job_proc? I don't know what else I need to check.

    Thank you

    I do not understand the problem: it runs because you have it enabled. If you want the technical details, there is a transient background process (ABP0, if I remember correctly) that manages all of the active autotasks.
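
    To see the client and task behind it yourself, a couple of dictionary queries along these lines (11g views) should be enough:

    select client_name, status
    from dba_autotask_client
    where client_name = 'auto optimizer stats collection';

    select task_name, status, last_good_date
    from dba_autotask_task
    where client_name = 'auto optimizer stats collection';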

  • extracting data from the CLOB using materialized views

    Hello

    We have XML data in a CLOB from which I need to extract ~50 attributes on a daily basis, so we decided to use materialized views with full refreshes (open to better suggestions).

    A small snippet of code

    CREATE MATERIALIZED VIEW MWMRPT.TASK_INBOUND

    BUILD IMMEDIATE

    REFRESH FULL ON DEMAND

    WITH ROWID

    AS

    SELECT M.TASK_ID, M.BO_STATUS_CD, B.*
    FROM CISADM.M1_TASK M,
         XMLTABLE('/a'
           PASSING XMLPARSE(CONTENT '<a>' || M.BO_DATA_AREA || '</a>')
           COLUMNS
             serviceDeliverySiteId   varchar2(15) PATH 'cmPCGeneralInfo/serviceDeliverySiteId',
             serviceSequenceId       varchar2(3)  PATH 'cmPCGeneralInfo/serviceSequenceId',
             completedByAssignmentId varchar2(50) PATH 'completedByAssignmentId',
             CUST_ID                 varchar2(10) PATH 'cmPCCustomerInformation/customerId',
             ACCT_SEQ                varchar2(5)  PATH 'customerInformation/accountId',
             AGRMT_SEQ               varchar2(5)  PATH 'cmPCCustomerAgreement/agreementId',
             COLL_SEQ                varchar2(5)  PATH 'cmPCGeneralInfo/accountCollectionSeq',
             REVENUE_CLASS           varchar2(10) PATH 'cmPCCustomerAgreement/revenueClassCode',
             REQUESTED_BY            varchar2(50) PATH 'customerInformation/contactName',
             ... -- and so on, ~50 attributes in total

    This DDL ran for > 20 hours and no materialized view was created. There are certain constraints we have:

    • Cannot create a materialized view log
    • cannot change the source, as it is a vendor-defined table
    • cannot do an ETL

    DB is 11g R2

    Any ideas/suggestions are very much appreciated

    I explored a similar approach, using the following test case.

    It creates a table "MASTER_TABLE" containing 20,000 rows and a CLOB holding an XML fragment like this:

    09HOLVUF3T6VX5QUN8UBV9BRW3FHRB9JFO4TSV79R6J87QWVGN

    UUL47WDW6C63YIIBOP1X4FEEJ2Z7NCR9BDFHGSLA5YZ5SAH8Y8

    O1BU1EXLBU945HQLLFB3LUO03XPWMHBN8Y7SO8YRCQXRSWKKL4

    ...

    1HT88050QIGOPGUHGS9RKK54YP7W6OOI6NXVM107GM47R5LUNC

    9FJ1JZ615EOUIX6EKBIVOWFDYCPQZM2HBQQ8HDP3ABVJ5N1OJA

    then an intermediate table "MASTER_TABLE_XML" with the same columns, except that the CLOB becomes an XMLType, and finally an MVIEW:

    SQL > create table master_table as
          select level as id
               , cast('ROW' || to_char(level) as varchar2(30)) as name
               , (
                   select xmlserialize(content
                            xmlagg(
                              xmlelement(evalname('ThisIsElement' || to_char(level)), dbms_random.string('X', 50))
                            )
                          as clob indent
                          )
                   from dual
                   connect by level <= 50
                 ) as xmlcontent
          from dual
          connect by level <= 20000 ;

    Table created.

    SQL > call dbms_stats.gather_table_stats (user, 'MASTER_TABLE');

    Call completed.

    SQL > create table master_table_xml (
            id         number,
            name       varchar2(30),
            xmlcontent xmltype
          )
          xmltype column xmlcontent store as securefile binary xml ;

    Table created.

    SQL > create materialized view master_table_mv
          build deferred
          refresh full on demand
          as
          select t.id
               , t.name
               , x.*
          from master_table_xml t
             , xmltable('/r' passing t.xmlcontent
                 columns
                   ThisIsElement1  varchar2(50) path 'ThisIsElement1',
                   ThisIsElement2  varchar2(50) path 'ThisIsElement2',
                   ThisIsElement3  varchar2(50) path 'ThisIsElement3',
                   ThisIsElement4  varchar2(50) path 'ThisIsElement4',
                   ThisIsElement5  varchar2(50) path 'ThisIsElement5',
                   ThisIsElement6  varchar2(50) path 'ThisIsElement6',
                   ThisIsElement7  varchar2(50) path 'ThisIsElement7',
                   ThisIsElement8  varchar2(50) path 'ThisIsElement8',
                   ThisIsElement9  varchar2(50) path 'ThisIsElement9',
                   ThisIsElement10 varchar2(50) path 'ThisIsElement10',
                   ThisIsElement11 varchar2(50) path 'ThisIsElement11',
                   ThisIsElement12 varchar2(50) path 'ThisIsElement12',
                   ThisIsElement13 varchar2(50) path 'ThisIsElement13',
                   ThisIsElement14 varchar2(50) path 'ThisIsElement14',
                   ThisIsElement15 varchar2(50) path 'ThisIsElement15',
                   ThisIsElement16 varchar2(50) path 'ThisIsElement16',
                   ThisIsElement17 varchar2(50) path 'ThisIsElement17',
                   ThisIsElement18 varchar2(50) path 'ThisIsElement18',
                   ThisIsElement19 varchar2(50) path 'ThisIsElement19',
                   ThisIsElement20 varchar2(50) path 'ThisIsElement20',
                   ThisIsElement21 varchar2(50) path 'ThisIsElement21',
                   ThisIsElement22 varchar2(50) path 'ThisIsElement22',
                   ThisIsElement23 varchar2(50) path 'ThisIsElement23',
                   ThisIsElement24 varchar2(50) path 'ThisIsElement24',
                   ThisIsElement25 varchar2(50) path 'ThisIsElement25',
                   ThisIsElement26 varchar2(50) path 'ThisIsElement26',
                   ThisIsElement27 varchar2(50) path 'ThisIsElement27',
                   ThisIsElement28 varchar2(50) path 'ThisIsElement28',
                   ThisIsElement29 varchar2(50) path 'ThisIsElement29',
                   ThisIsElement30 varchar2(50) path 'ThisIsElement30',
                   ThisIsElement31 varchar2(50) path 'ThisIsElement31',
                   ThisIsElement32 varchar2(50) path 'ThisIsElement32',
                   ThisIsElement33 varchar2(50) path 'ThisIsElement33',
                   ThisIsElement34 varchar2(50) path 'ThisIsElement34',
                   ThisIsElement35 varchar2(50) path 'ThisIsElement35',
                   ThisIsElement36 varchar2(50) path 'ThisIsElement36',
                   ThisIsElement37 varchar2(50) path 'ThisIsElement37',
                   ThisIsElement38 varchar2(50) path 'ThisIsElement38',
                   ThisIsElement39 varchar2(50) path 'ThisIsElement39',
                   ThisIsElement40 varchar2(50) path 'ThisIsElement40',
                   ThisIsElement41 varchar2(50) path 'ThisIsElement41',
                   ThisIsElement42 varchar2(50) path 'ThisIsElement42',
                   ThisIsElement43 varchar2(50) path 'ThisIsElement43',
                   ThisIsElement44 varchar2(50) path 'ThisIsElement44',
                   ThisIsElement45 varchar2(50) path 'ThisIsElement45',
                   ThisIsElement46 varchar2(50) path 'ThisIsElement46',
                   ThisIsElement47 varchar2(50) path 'ThisIsElement47',
                   ThisIsElement48 varchar2(50) path 'ThisIsElement48',
                   ThisIsElement49 varchar2(50) path 'ThisIsElement49',
                   ThisIsElement50 varchar2(50) path 'ThisIsElement50'
               ) x ;

    Materialized view created.

    The refresh is then performed in two steps:

    1. INSERT INTO master_table_xml
    2. Refresh the MVIEW

    (Note: as we are inserting into an XMLType column, we need a single-rooted XML document this time.)

    SQL > set timing on

    SQL >

    SQL > truncate table master_table_xml;

    Table truncated.

    Elapsed time: 00:00:00.27

    SQL >

    SQL > insert into master_table_xml
          select id
               , name
               , xmlparse(document '<r>' || xmlcontent || '</r>')
          from master_table ;

    20000 rows created.

    Elapsed time: 00:04:38.72

    SQL >

    SQL > call dbms_mview.refresh ('MASTER_TABLE_MV');

    Call completed.

    Elapsed time: 00:00:22.42

    SQL >

    SQL > select count(*) from master_table_mv;

    COUNT (*)

    ----------

    20000

    Elapsed time: 00:00:01.38

    SQL > truncate table master_table_xml;

    Table truncated.

    Elapsed time: 00:00:00.41

  • subqueries in the select list

    Why is using subqueries in the SELECT list so slow? Oracle does not optimize them into a hash semi-join. Is there a rule about this? Is there a list of optimizations that look simple but that Oracle cannot carry out? My Oracle version is 11g Enterprise Edition Release 11.2.0.1.0.

    Here is an example:

    create table test as select level i from dual connect by level <= 1000 * 1000;

    Call dbms_stats.gather_table_stats (user, 'TEST');

    Quick version - find duplicates of i:

    select * from test t1 where exists (select 1 from test t2 where t2.i = t1.i and t2.rowid <> t1.rowid); -- 0 s

    -- other methods are slow because, for each row of TEST, the whole TEST table is scanned:

    select * from (select case when exists (select 1 from test t2 where t2.i = t1.i and t2.rowid <> t1.rowid) then 'ID' end err from test t1) where err is not null;

    select max(case when exists (select 1
                                 from test t2
                                 where t2.i = t1.i
                                   and t2.rowid <> t1.rowid) then
                 'ID'
               end) err
    from test t1;

    select max(case when t1.rowid not in (select min(t2.rowid)
                                          from test t2
                                          group by i) then
                 'ID'
               end) err
    from test t1;

    Here is an explanation:

    Ask Tom: "disadvantages of queries with many 'select ...'"

  • automatic optimizer stats collection is enabled, but it is not running and nothing shows up

    Automatic optimizer stats collection is enabled, but it is not running and nothing shows up.

    I activated automatic optimizer stats collection days ago, but it has never run. DB version is 11gR2, OS is Red Hat 5. It shows as enabled in dba_autotask_client, but nothing appears in dba_autotask_task. Please help with this issue.

    SQL > select client_name, status from dba_autotask_client;

    CLIENT_NAME                         STATUS
    ----------------------------------- --------
    auto optimizer stats collection     ENABLED
    auto space advisor                  DISABLED
    sql tuning advisor                  ENABLED

    SQL > select task_name, status, to_char(last_good_date, 'YYYY-MM-DD HH24:MI:SS') last_good_date, last_good_duration
          from dba_autotask_task
          where client_name = 'auto optimizer stats collection';

    no rows selected

    SQL > select program_action, number_of_arguments, enabled
          from dba_scheduler_programs
          where owner = 'SYS'
          and program_name = 'GATHER_STATS_PROG';

    PROGRAM_ACTION                             NUMBER_OF_ARGUMENTS ENABL

    DBMS_STATS.gather_database_stats_job_proc                    0 TRUE

    SQL > select w.window_name, c.autotask_status, c.optimizer_stats, w.repeat_interval, w.enabled
          -- , w.duration, w.last_start_date, w.next_start_date
          from dba_autotask_window_clients c, dba_scheduler_windows w
          where c.window_name = w.window_name
          order by last_start_date desc;

    WINDOW_NAME       AUTOTASK OPTIMIZE REPEAT_INTERVAL                                        ENABL

    MONDAY_WINDOW     ENABLED  ENABLED  freq=daily;byday=MON;byhour=22;byminute=0;bysecond=0  TRUE
    SUNDAY_WINDOW     ENABLED  ENABLED  freq=daily;byday=SUN;byhour=6;byminute=0;bysecond=0   TRUE
    SATURDAY_WINDOW   ENABLED  ENABLED  freq=daily;byday=SAT;byhour=6;byminute=0;bysecond=0   TRUE
    FRIDAY_WINDOW     ENABLED  ENABLED  freq=daily;byday=FRI;byhour=22;byminute=0;bysecond=0  TRUE
    THURSDAY_WINDOW   ENABLED  ENABLED  freq=daily;byday=THU;byhour=22;byminute=0;bysecond=0  TRUE
    WEDNESDAY_WINDOW  ENABLED  ENABLED  freq=daily;byday=WED;byhour=22;byminute=0;bysecond=0  TRUE
    TUESDAY_WINDOW    ENABLED  ENABLED  freq=daily;byday=TUE;byhour=22;byminute=0;bysecond=0  TRUE

    7 rows selected.

    SQL >

    Select max (last_analyzed) from all_tables;

    Please post the result of the SQL above.

  • Does the GATHER_STATS job collect statistics for 'static' tables?

    Oracle version: 10gR2

    If a production table has not changed (no DML) in the last 10 days, will Oracle's default stats collection job
    DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC
    still gather statistics on this table?

    The answer is no, unless you have changed the defaults of the optimizer stats collection, because approximately 10% of the data must have changed before the table is eligible for new statistics.
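
    You can check which tables have crossed that threshold yourself, for example (SCOTT is just a placeholder owner):

    exec dbms_stats.flush_database_monitoring_info

    select table_name, inserts, updates, deletes
    from dba_tab_modifications
    where table_owner = 'SCOTT';

    select table_name, stale_stats
    from dba_tab_statistics
    where owner = 'SCOTT' and stale_stats = 'YES';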

    See section 14.2.1, GATHER_STATS_JOB, in the Performance Tuning Guide:

    http://download.Oracle.com/docs/CD/B19306_01/server.102/b14211/stats.htm#sthref1068

    HTH - Mark D Powell.

  • SQL*Plus missing parenthesis

    Hi, experts,

    I use sqlplus to run this command.


    CALL dbms_stats.gather_table_stats (ownname => 'XXXX', tabname => 'XXX_TBL', estimate_percent => dbms_stats.auto_sample_size, cascade => TRUE, degree => dbms_stats.default_degree, method_opt => 'FOR ALL COLUMNS SIZE SKEWONLY')

    the SQL*Plus version is 9.2.0.1.0
    the database version is 9.2.0.7.0

    Is my syntax correct?
    It returns this error.



    ERROR on line 1:
    ORA-00907: missing right parenthesis


    Is this syntax not allowed in SQL*Plus?

    Hello

    That works fine for me (Oracle 10.1.0.2.0, with SQL*Plus 10.1.0.2.0).

    Is the command on one line? Make sure that it is.

    Did you issue the command you posted at the SQL > prompt, or is it part of PL/SQL code? (Don't use CALL inside PL/SQL.)

    Can you call the procedure with fewer arguments, say just the owner and table name? If so, add the other arguments back one at a time. Which argument causes the failure?
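
    Using EXEC (which wraps the call in an anonymous PL/SQL block), that incremental test could look like this, reusing the names from the post:

    exec dbms_stats.gather_table_stats(ownname => 'XXXX', tabname => 'XXX_TBL')
    -- if that works, add the remaining arguments back one at a time, e.g.:
    exec dbms_stats.gather_table_stats(ownname => 'XXXX', tabname => 'XXX_TBL', estimate_percent => dbms_stats.auto_sample_size)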

  • Inaccurate statistics calculated by the automatic statistics collection task

    Hello

    I have a table with about 300,000 rows, and I manually gathered statistics for the table and its index with the command "EXEC DBMS_STATS.gather_table_stats('SCOTT', 'TABLE_TEST');". Both the table and index statistics were accurate, and the SQL statements also performed very well. Then, during the night, Enterprise Manager recalculated the statistics, and the results were far from exact:

    OWNER: SCOTT
    INDEX_NAME: I_TEST_INDEX
    INDEX_TYPE: NORMAL
    TABLE_OWNER: SCOTT
    TABLE_NAME: TEST_TABLE
    TABLE_TYPE: TABLE
    UNIQUENESS: UNIQUE
    COMPRESSION: DISABLED
    PREFIX_LENGTH:
    TABLESPACE_NAME: USERS
    INI_TRANS: 2
    MAX_TRANS: 255
    INITIAL_EXTENT: 65536
    NEXT_EXTENT:
    MIN_EXTENTS: 1
    MAX_EXTENTS: 2147483645
    PCT_INCREASE:
    PCT_THRESHOLD:
    INCLUDE_COLUMN:
    FREELISTS:
    FREELIST_GROUPS:
    PCT_FREE: 10
    LOGGING: YES
    BLEVEL: 2
    LEAF_BLOCKS: 16
    DISTINCT_KEYS: 141
    AVG_LEAF_BLOCKS_PER_KEY: 1
    AVG_DATA_BLOCKS_PER_KEY: 1
    CLUSTERING_FACTOR: 19
    STATUS: VALID
    NUM_ROWS: 141
    SAMPLE_SIZE: 141
    LAST_ANALYZED: November 4, 2008 22:03
    DEGREE: 1
    INSTANCES: 1
    PARTITIONED: NO
    TEMPORARY: N
    GENERATED: N
    SECONDARY: N
    BUFFER_POOL: DEFAULT
    USER_STATS: NO
    DURATION:
    PCT_DIRECT_ACCESS: 100
    ITYP_OWNER:
    ITYP_NAME:
    PARAMETERS:
    GLOBAL_STATS: YES
    DOMIDX_STATUS:
    DOMIDX_OPSTATUS:
    FUNCIDX_STATUS:
    JOIN_INDEX: NO
    IOT_REDUNDANT_PKEY_ELIM: NO
    DROPPED: NO


    The new statistics report that there are only 141 distinct keys in the table, although there are actually 300,000. The result is that the execution of SQL statements degrades considerably. Does anyone know why the statistics became so inaccurate? I'm on 10.2.0.1.0.

    Kind regards
    Swear

    user633661 wrote:
    The only thing I'm not sure about is whether I should include the statistics gathering step in the batch itself (it is a PL/SQL procedure) - I mean, is it normal practice to mix code that gathers statistics with business-logic code?

    Swear,

    I think Justin has already answered this pretty well; it is quite common in data-warehouse environments or other batch-driven environments. You need accurate (or let's say representative) statistics if you want the CBO to do a good job.
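
    A minimal sketch of such a batch step (all object names are made up for illustration):

    create or replace procedure load_and_refresh_stats as
    begin
      insert into sales_fact select * from sales_staging;   -- the business-logic part
      commit;
      -- make the freshly loaded data visible to the CBO before dependent queries run
      dbms_stats.gather_table_stats(ownname => user, tabname => 'SALES_FACT', cascade => TRUE);
    end;
    /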

    BTW, do you know whether gathering statistics on an index-organized table is in any way different from gathering on heap-organized tables?

    No, there is no difference. DBMS_STATS (and, surprisingly, even ANALYZE) is aware of IOTs and gathers the "table", column and potential overflow-segment statistics as well as the (mandatory) primary key index (which represents the IOT), even if you do not specify "cascade => true" in the DBMS_STATS call.

    Kind regards
    Randolf

    Oracle related blog stuff:
    http://Oracle-Randolf.blogspot.com/

    SQLTools ++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676 /.
    http://sourceforge.NET/projects/SQLT-pp/

  • Running DBMS_STATS.SET_TABLE_STATS from a package

    Hello

    I tried to set table statistics via DBMS_STATS.SET_TABLE_STATS for a table I do not own.

    When I run the command from SQL*Plus, it works fine.

    When I run the same command from a package, I get an error:

    ERROR on line 1:

    ORA-20000: TABLE "XXX"."YYYYY" does not exist or insufficient privileges

    ORA-06512: at "SYS." DBMS_STATS", line 4455

    ORA-06512: at "SYS." DBMS_STATS", line 10128

    ORA-06512: at "TKPA. KOPA_ORP_RELACIJE', line 13

    ORA-06512: at line 2

    The user has the DBA role plus ALTER, DELETE, INDEX, INSERT, SELECT, UPDATE, REFERENCES, ON COMMIT REFRESH, QUERY REWRITE, DEBUG and FLASHBACK on the table.

    Does anyone have an idea?

    Thank you and best regards.

    Maybe

    Read the notes on using SET_TABLE_STATS at http://docs.oracle.com/cd/E11882_01/appdev.112/e40758/d_stats.htm#ARPLS059

    To invoke this procedure you must be the owner of the table, or you need the ANALYZE ANY privilege. For objects owned by SYS, you must be the owner of the table, or you need the ANALYZE ANY DICTIONARY privilege or the SYSDBA privilege.
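
    A minimal sketch of what that usually means in practice (note that privileges granted through a role such as DBA are not visible inside a definer's-rights package, so a direct grant is needed; the names below are only the placeholders from the error message):

    -- as a suitably privileged user: grant the privilege directly, not via a role
    grant analyze any to tkpa;

    -- the call inside the package then stays the same, e.g.:
    begin
      dbms_stats.set_table_stats(ownname => 'XXX', tabname => 'YYYYY', numrows => 100000);
    end;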

    Regards

    Etbin

    http://docs.Oracle.com/CD/E11882_01/AppDev.112/e25519/subprograms.htm#LNPLS00809 can be useful also

  • Execute "dbms_stats.gather_table_stats in the start = &gt; run until the peak hours = &gt; pause = &gt; restart from where he was arrested after the rush hour" mode.

    Guys,

    Please let us know if it is possible to run dbms_stats.gather_table_stats in "start => run during off-peak hours => pause during peak hours => resume from where it stopped after peak hours" mode.

    I have a partitioned table with a mammoth number of records that requires full table statistics collection once every 3 months. Stats gathering is so expensive that it takes almost 2 days to complete.  Our goal is to ensure that no SQL running during peak hours (15:00 to 22:00) is affected by the statistics collection. That is why we would like the stats collection to run in the above mode.

    Any ideas or suggestions would be much appreciated and received with profound esteem.

    -Bugs

    Like others, I wonder why full stats are required.

    Do you gather stats nightly on the stale partitions?

    Check dba_tab_modifications for the changes made to the table. Your nightly stats job decides from there whether new stats are needed, so if a partition has not seen 10% change, why re-gather its stats?  There are a few other things going on behind the scenes, but that is close enough.  Even if you set up additional stats jobs and kick them off, each will simply pick the next partition that has not been done yet, so you risk several jobs trying to do the same work.

    There are probably 10 different ways to do this depending on how good you are with PL/SQL, but generate your stats-gathering commands with something like the query below, then open multiple SQL*Plus windows and split the output into sections, so that you are gathering statistics on several partitions at the same time.    Failing that, you could write a standalone PL/SQL proc that gathers stats for whatever partition is passed in and call it in a loop, passing a partition each time (a sketch of that proc follows the query below); either way, you get the idea: gather statistics on more than one partition at the same time.

    Change the value of table_owner and table_name

    exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

    select command from
    ( select (nvl(a.updates,0) + nvl(a.inserts,0) + nvl(a.deletes,0)) * 100
             / nvl(b.num_rows,1) as change
           , 'exec DBMS_STATS.GATHER_TABLE_STATS(' || '''' || 'TABLE_OWNER' || '''' || ',' || '''' || 'TABLE_NAME' || '''' ||
             ', granularity => ' || '''' || 'PARTITION' || '''' ||
             ', partname => ' || '''' || a.partition_name || '''' ||
             ', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => ' || '''' || 'FOR ALL COLUMNS SIZE AUTO' || '''' || ')' as command
      from dba_tab_modifications a, dba_tab_partitions b
      where a.table_name = 'YOUR_BIG_PARTITIONED_TABLE'
        and a.table_name = b.table_name
        and a.partition_name = b.partition_name
        and nvl(a.updates,0) + nvl(a.inserts,0) + nvl(a.deletes,0) > 0
    )
    where change >= 10
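
    And the standalone-procedure variant mentioned above could be sketched like this (procedure and partition names are made up):

    create or replace procedure gather_one_partition(p_owner varchar2, p_table varchar2, p_part varchar2) as
    begin
      dbms_stats.gather_table_stats(
        ownname          => p_owner,
        tabname          => p_table,
        partname         => p_part,
        granularity      => 'PARTITION',
        estimate_percent => dbms_stats.auto_sample_size,
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO');
    end;
    /

    -- each SQL*Plus window then loops over its own share of the partitions, e.g.:
    exec gather_one_partition('TABLE_OWNER', 'YOUR_BIG_PARTITIONED_TABLE', 'P_2012_Q1')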

  • Calling a DBMS package over a database link

    I am monitoring changes on tables using dba_tab_modifications across a number of databases, with one monitoring schema per instance.  To get the latest changes I have to flush the monitoring info, which means I have to exec the dbms_stats package remotely.  I can do that by creating a proc and calling the proc over a database link.

    Remote

    create or replace procedure flush_db_info
    as
    begin
      DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
    end;
    /

    local

    create synonym remote_flush for flush_db_info@remote;

    exec remote_flush;

    Is there any way I can run DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO@remote from the local database so that it executes on the remote one, without creating the proc/synonym?

    This is no biggie; I got it working like this, I just want to see if it is possible.

    Have you tried?
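
    For what it is worth, PL/SQL does allow remote calls to packaged procedures over a database link, so the direct form the reply hints at would simply be the following (assuming the link is named remote, as above; whether it succeeds may still depend on privileges on the remote side):

    exec dbms_stats.flush_database_monitoring_info@remote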

  • DBMS_STATS.SET_TABLE_PREFS does not work as expected

    Hello!

    I am seeing rather strange behavior from the DBMS_STATS.SET_TABLE_PREFS procedure.

    I want to ensure that statistics on the table are gathered in parallel with degree = 4, and I want to achieve this by defining table preferences rather than explicitly specifying the degree in the call that gathers the table statistics.

    That's what I did:

    -- first of all, check the global preferences:

    select dbms_stats.get_prefs('DEGREE', 'GLOBAL') from dual;
    ======================================
    NULL

    Then get the table preferences:

    select dbms_stats.get_prefs('DEGREE', 'DARL', 'EIGNUNGSEINZELERGEBNISSE') from dual;
    ================================================
    NULL

    Then I try to set the DEGREE to 4:

    begin
      DBMS_STATS.set_table_prefs('DARL', 'EIGNUNGSEINZELERGEBNISSE', 'DEGREE', '4');
    end;
    /

    and after that I query the preferences again:

    select dbms_stats.get_prefs('DEGREE', 'DARL', 'EIGNUNGSEINZELERGEBNISSE') from dual;
    ====================================================
    4

    But when I run

    begin
      DBMS_STATS.gather_table_stats('DARL', 'EIGNUNGSEINZELERGEBNISSE');
    end;
    /

    the statistics are not gathered in parallel...

    I just need to explicitly mention degree - and it runs in parallel:

    begin
      DBMS_STATS.gather_table_stats('DARL', 'EIGNUNGSEINZELERGEBNISSE', degree => 4);
    end;
    /


    Now, I just checked the processes that each command produced... in the last example you could clearly see the 4 parallel processes.

    To be honest, I don't know in which dictionary view I could look up whether the resulting statistics-gathering operation actually runs in parallel or not.


    And the table is partitioned and 5.4 GB... it takes time to gather statistics with all the other settings as they are.

    What is happening here... have I forgotten something? Is there another setting that must be set for the preferences to work properly?

    Database version is 11.2

    Edited by: Robert on November 26, 2012 14:25

    Edited by: Robert on November 27, 2012 08:18

    I just checked and got the same result in the 11.2 database.

    At first, I noticed the following in the description of the procedure:

    ... DBMS_STATS may use serial execution if the size of the object does not warrant parallel execution.

    I checked on tables of 4 GB and 250 GB with the same result.

    The second thing I noticed was how default degree is calculated in the procedure:

    ...
      degree           NUMBER   DEFAULT to_degree_type(get_param('DEGREE')),
    ...
    

    Please note the use of the now-deprecated DBMS_STATS.GET_PARAM function, which has been replaced by GET_PREFS. The former does not pick up preferences defined at the table level. If you set the global preference using DBMS_STATS.SET_GLOBAL_PREFS, it is used as the default value (I checked).
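
    For example, a quick way to verify that behaviour (a sketch using the poster's table):

    exec dbms_stats.set_global_prefs('DEGREE', '4')

    -- deprecated, but this is what gather_table_stats picks up as its default:
    select dbms_stats.get_param('DEGREE') from dual;

    begin
      dbms_stats.gather_table_stats('DARL', 'EIGNUNGSEINZELERGEBNISSE');   -- now runs with degree 4
    end;
    /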

    Depending on your situation you may find this solution unacceptable; in that case you would always explicitly specify the degree, as you did before.

    Looks like a bug to me, but I have no confirmation of that yet.

    Hope this helps.
