FNDGSCST module: Gather Schema Statistics

I had a similar problem before, and I remember it ended with renaming or dropping the object in question.

Surely, there MUST be a way to achieve the following:

1. The Gather Schema Statistics concurrent request can still run (with the object with this name still present in the database)

2. Preferably, statistics are also gathered on that object (so the 'fix' is not simply to exclude the object from a buggy concurrent request)

It's on a 12.1.3 environment.

I thought the cause of this problem was someone creating additional custom EBS objects with exotic names, without thinking about the consequences. I have no idea about the source of this particular object (and I guess there is more than one).

Is there a definitive solution to this problem? The Gather Schema Statistics concurrent request has had a very bad reputation here, and not only because of this bug. The problem is that it stops on ALL errors, which is another bug.

* Starts * April 2, 2014 15:05:58

ORACLE error 20001 in FDPSTP

Cause: FDPSTP failed due to ORA-20001: SYS_NT4IBAQZIU2MRGQKWKXDGZYG== is an invalid identifier

ORA-06512: at "APPS.FND_STATS", line 774

ORA-06512: at line 1

.

The SQL statement being executed at the time of the error was: SE

+---------------------------------------------------------------------------+

Start of log messages from FND_FILE

+---------------------------------------------------------------------------+

In GATHER_SCHEMA_STATS, schema_name = percent = 40 degree = 1 internal_flag = NOBACKUP

ORA-20001: SYS_NT4IBAQZIU2MRGQKWKXDGZYG== is an invalid identifier

+---------------------------------------------------------------------------+

End of log messages from FND_FILE

+---------------------------------------------------------------------------+

See if MOS Doc 1363044.1 can help.
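
For anyone hitting the same error: the SYS_NT... name is an Oracle-generated nested-table storage table, which FND_STATS rejects as an invalid identifier. A first diagnostic step (a sketch using standard dictionary views; the LIKE filter is illustrative, use the exact name from your log) is to find the parent table that owns the storage segment:

    -- Identify the parent table/column behind the system-generated
    -- nested-table storage table named in the ORA-20001 error.
    select owner, parent_table_name, parent_table_column, table_name as storage_table
      from dba_nested_tables
     where table_name like 'SYS_NT%';

From there, the note above should help you decide whether the owning object can be excluded from stats gathering or needs to be cleaned up.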

Tags: Oracle Applications

Similar Questions

  • Unique constraint error while running Gather Schema Statistics

    Hello
    I get this error periodically when running Gather Schema Statistics:
    In GATHER_SCHEMA_STATS , schema_name= ALL percent= 40 degree = 8 internal_flag= NOBACKUP
    Unable to correctly update the history table - fnd_stats_hist.
    -1 - ORA-00001: unique constraint (APPLSYS.FND_STATS_HIST_U1) violated
    (the same two messages repeat many more times in the log)
    I went through note 470556.1, but I have not yet applied patch 5876047 as the note suggests.

    Can I truncate the FND_STATS_HIST table? Is it safe to truncate this table?

    Thank you
    ARS

    Published by: user7640966 on January 2, 2011 23:58

    Hi Ars;

    If this is prod, I suggest you make sure you have a valid backup of your system, and also raise an SR to confirm with Oracle Support. If you are on a clone or test instance, do as the doc mentions: first back up the FND_STATS_HIST table (make sure you have a valid backup), truncate it, and retest the issue.

    Regards,
    HELIOS
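
    If you go the backup-and-truncate route on a test/clone instance, a minimal sketch of that step (the backup table name is just an example):

    -- Keep a copy of the history table before truncating it.
    create table applsys.fnd_stats_hist_bak as select * from applsys.fnd_stats_hist;

    truncate table applsys.fnd_stats_hist;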

  • Gather Schema Statistics can no longer be cancelled but Phase is stuck at Running

    Hi all.

    After a Gather Schema Statistics job was started inadvertently, I cancelled it. Now we can see that the parent request is stuck at:

    Phase = Running

    Status = Paused

    If we try to cancel, we see "Request * can no longer be cancelled. The concurrent manager process that was running this request has ended abnormally. The ICM will mark this request as completed with error."

    Do we need to worry about this? It is not really running. It is scheduled to run again next weekend.

    Is it safe to leave it, or should I take action? I don't think anything more needs to be done, but others disagree.

    Thank you.

    11.5.10.2

    11.2.0.2

    RHEL 5.5

    Hello

    1) It will not run again automatically via the scheduler; since you cancelled it this time, it must be scheduled again.

    2) Check whether it still has a database session in v$session:

    select oracle_process_id from fnd_concurrent_requests where request_id = <gather schema stats request id>;

    select * from v$session where paddr = (select addr from v$process where spid = '<oracle_process_id_obtained_above>');

    Kill the session for the SID above; the query will return no rows if it is not actually running.

    3) You can then manually update fnd_concurrent_requests to mark the request as completed:

    SQL> update fnd_concurrent_requests set phase_code = 'C', status_code = 'E' where request_id = <request id>;  -- this should update only 1 row, for the gather schema stats request

    SQL> commit;  (remember to commit)

    4) Ensure that all managers are up (actual = target) and that other concurrent requests complete with phase code C and status code C.
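
    As a quick check for point 4, compare actual versus target manager processes (a sketch against the standard FND table):

    select concurrent_queue_name, max_processes as target, running_processes as actual
      from fnd_concurrent_queues
     where enabled_flag = 'Y';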

    Regards,

    KK

  • Gather Schema Statistics

    Hi all

    11.1.0.6

    We are seeing a performance issue with some batch jobs.
    Could it be due to stale schema statistics? We gathered schema statistics a month ago.
    How often should we gather schema statistics?


    Thank you very much
    Kinz

    Well, I assumed the query had degraded. Do you mean it was always like that? In that case you must take a very different approach: you have to tune it from scratch.

    First get the AWR SQL report, as I already told you to do. Then dig into more detail: run the suspect query with the hint
    /*+ gather_plan_statistics */

    and then get the execution statistics with:
    select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));

    Yes, it will take an hour or two to run, but it's the best way to get to the truth.
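
    Putting the two steps together, a minimal sketch (the table, column and bind are placeholders for your suspect query; both statements must run in the same session):

    -- 1) Run the suspect statement with rowsource execution statistics enabled.
    select /*+ gather_plan_statistics */ count(*)
      from some_big_table t
     where t.some_column = :b1;

    -- 2) Immediately afterwards, show estimated vs. actual rows for the last cursor.
    select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));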

  • Gather Schema Statistics run frequency

    Hello
    In EBS on AIX, we are experiencing a performance issue.
    While researching, we came across MOS note 744143.1,
    which says that Gather Schema Statistics should not be run every day.
    Could the slowness be caused by running it every night, as is our case?
    We run it every night for all schemas.
    Please clarify this!

    Hello

    Run this concurrent program on a weekly or monthly basis, depending on the load/data change rate in your instance - see (Note 168136.1 - How Often Should Gather Schema Statistics Program be Run?) for more details.

    Kind regards
    Hussein

  • Gather Schema Statistics concurrent request...

    Hello

    The Gather Schema Statistics ('ALL') concurrent request is taking very long and has not completed even after 8 hours. Please help me with gathering statistics for all schemas in an Oracle EBS environment.

    Application: Oracle Ebs R12 (12.1.2)
    DB: 11.1.0.7
    OS: Linux5 * 86 X 64

    Awaiting your reply.

    Kind regards
    Mohsin

    Hi Hussein

    Thanks for the reply. Actually, statistics were locked on some tables, so the problem was solved after running the commands below.

    select distinct owner, table_name, stattype_locked
      from dba_tab_statistics
     where stattype_locked is not null;

    exec dbms_stats.unlock_schema_stats('SYS');

    exec dbms_stats.unlock_table_stats('SYS','WRH$_SYSSTAT');

    Kind regards
    Mohsin

  • How to improve the performance of the Gather Schema Statistics CP?

    Hello

    We have implemented only a few modules (Oracle Financials, SCM, OPM, HR), but the 'Gather Schema Statistics' CP runs for 10:30 hours. The parameters are as follows:
    Schema Name: ALL
    Estimate Percent: 10
    Degree: Null
    Backup Flag: NOBACKUP
    Restart Request ID: Null
    History Mode: LASTRUN
    Gather Options: GATHER
    Modifications Threshold: Null
    Invalidate Dependent Cursors: Y

    How can I tune this program for better performance? How can I include only the modules that are actually implemented, not all schemas?

    Regards,
    Ariz

    How can I tune this program for better performance? How can I include only the modules that are actually implemented, not all schemas?

    There is no exact answer to your question; you will need to test different settings until you are satisfied with the performance - see the definition of the parameters used by the Gather Schema Statistics program [ID 556466.1].

    Try with "Estimate percent" between 10% and 40%

    Thank you
    Hussein
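
    Before narrowing the program down to specific schemas, it can help to see which schemas actually have stale objects (a sketch against the data dictionary; filter the owners to your product schemas as needed):

    -- Count stale tables per schema to decide which schemas are worth
    -- gathering individually instead of running with Schema Name = ALL.
    select owner, count(*) as stale_tables
      from dba_tab_statistics
     where stale_stats = 'YES'
     group by owner
     order by stale_tables desc;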

  • How to gather table statistics for tables in a different schema

    Hi all

    I have a table in one schema, and I want to gather statistics for that table from a different schema.

    I granted: GRANT ALL ON SCHEMA1.T1 TO SCHEMA2;

    And when I tried to run the gather statistics command using

    DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'SCHEMA1', TABNAME => 'T1');

    the call fails.

    Is there a way to gather table statistics for tables in one schema from another schema?

    Thank you
    MK.

    You must grant ANALYZE ANY to SCHEMA2.

    SY.
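
    A minimal sketch of the fix, reusing the names from the question:

    -- Object grants are not enough for DBMS_STATS; the calling schema
    -- needs the ANALYZE ANY system privilege.
    grant analyze any to schema2;

    -- Then, connected as SCHEMA2:
    exec dbms_stats.gather_table_stats(ownname => 'SCHEMA1', tabname => 'T1');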

  • Is there a Gather Schema Stats option in the adadmin menu?

    Hi all

    EBS R12.2

    RHEL 6.6

    In EBS R12.2, where can I run a concurrent program that analyzes/computes statistics for the cost-based optimizer to work with?

    I only do it on the back-end database side using: exec dbms_stats.gather_schema_stats('APPS');

    But there are many schemas in EBS; how can I run it for all of them at the database level?

    Is there a pre-built script for gathering statistics for the entire database, perhaps in the rdbms/admin folder?

    Is there an option in ADADMIN to gather all schema statistics?

    Thank you very much

    MK

    The supported methods of gathering statistics are:

    - Run the "Gather Schema Statistics" concurrent program (pass 'ALL' for all schemas).

    - Use fnd_stats.gather_schema_statistics.

    EBPERF FAQ - Collecting Statistics with Oracle EBS 11i and R12 (Doc ID 368252.1) - 8) What are the supported methods to gather statistics for the EBS schemas?

    Using dbms_stats.gather_schema_stats is not supported.

    Thank you

    Hussein
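
    A minimal sketch of the supported API route, run from SQL*Plus as the APPS user (the 'GL' schema is just an example; the single-argument call assumes the remaining parameters have defaults, so check the FND_STATS package spec in your release):

    -- Gather statistics for one EBS schema via the supported FND_STATS API.
    exec fnd_stats.gather_schema_statistics('GL');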

  • AUTHID CURRENT_USER collect statistics

    Hi all

    I use Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64 bit Production. I created a procedure in SCHEMA1 that gathers table statistics using DBMS_STATS, with AUTHID CURRENT_USER.

    CREATE OR REPLACE PROCEDURE SCHEMA1.getstats ( p_schema IN VARCHAR2,
                                p_table  IN VARCHAR2)
    AUTHID CURRENT_USER
    AS
    BEGIN
            dbms_stats.gather_table_stats(ownname          => p_schema,
                      tabname          => p_table,
                      method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
                      degree           => 8,
                      force            => true,
                      no_invalidate    => false);
    END;
    

    I'm trying to use the same procedure to gather statistics for a table in another schema. However, I get the error ORA-20000: Unable to analyze TABLE "SCHEMA2"."DETAIL_VALUE", insufficient privileges or does not exist.

    EXEC getstats ( 'SCHEMA2','DETAIL_VALUE');
    

    I granted privileges on the table to SCHEMA1, but I still get the error. I understand AUTHID CURRENT_USER makes the procedure run with the privileges of the invoking user rather than the owning schema, but I still get the error. Can someone help me with this?

    GRANT ALL ON DETAIL_VALUE TO SCHEMA1
    

    It has nothing to do with AUTHID CURRENT_USER and everything to do with privileges. A grant on the table is not enough to let you analyze it.

    SQL> conn u1/u1
    Connected.
    SQL> create table foo(bar number);                                               
    
    Table created.                                                                   
    
    SQL> grant all on foo to u2;                                                     
    
    Grant succeeded.                                                                 
    
    SQL> conn u2/u2
    Connected.
    SQL> exec dbms_stats.gather_table_stats('U1', 'FOO');
    BEGIN dbms_stats.gather_table_stats('U1', 'FOO'); END;                           
    
    *
    ERROR at line 1:
    ORA-20000: Unable to analyze TABLE "U1"."FOO", insufficient privileges or does
    not exist
    ORA-06512: at "SYS.DBMS_STATS", line 33859
    ORA-06512: at line 1                                                             
    
    SQL> conn / as sysdba
    Connected.                                                                       
    
    SQL> grant analyze any to u2;                                                    
    
    Grant succeeded.                                                                 
    
    SQL> conn u2/u2
    Connected.
    SQL> exec dbms_stats.gather_table_stats('U1', 'FOO');                            
    
    PL/SQL procedure successfully completed.
    
  • Gathering statistics in Oracle 11g

    Hi all

    OPERATING SYSTEM: AIX
    DB: 11G

    I am trying to gather statistics for a specific schema using the command below:

    > exec dbms_stats.gather_schema_stats ('schema_name');

    The schema size is about 140 GB, and the above command has been running for almost an hour.
    I need to know if this is normal, and whether there is a way to tune the above command to speed it up.

    Also, how can we check whether the above command completed successfully when it is run in the background? Is there any way to do that?


    Kind regards
    Sphinx

    Hello

    The runtime for a 140 GB schema seems quite OK.

    Without any other parameters, the gather_schema_stats job will use the 'compute' option, which takes a long time for each table.

    I've written a procedure that checks dba_tab_statistics for 'stale' tables and gathers statistics only for those.

    The query that picks up these tables looks like this:

    SELECT table_name, 'NONE' AS partition_name
      FROM dba_tab_statistics a
     WHERE owner = i_schema
       AND NVL(stale_stats, 'NULL') = 'YES'
       AND NVL(stattype_locked, 'NULL') = 'NULL'
       AND NOT EXISTS
           (SELECT 1
              FROM dba_tab_partitions b
             WHERE b.table_owner = i_schema
               AND a.table_name = b.table_name)
    UNION ALL
    SELECT table_name, partition_name
      FROM dba_tab_statistics a
     WHERE owner = i_schema
       AND NVL(stale_stats, 'NULL') = 'YES'
       AND NVL(stattype_locked, 'NULL') = 'NULL'
       AND partition_name IS NOT NULL;

    The upper part picks up stale non-partitioned tables; the lower part picks up stale partitions of partitioned tables.

    This lets the procedure gather statistics for whole tables as well as for individual partitions.

    It skips tables whose statistics have already been gathered by the default Oracle job.

    Cheers,

    FJFranken
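
    A minimal sketch of the kind of driving loop described above (simplified: it keys off partition_name in dba_tab_statistics rather than checking dba_tab_partitions, and gathers each stale table or partition in turn):

    create or replace procedure gather_stale_stats (i_schema in varchar2) as
    begin
      for r in (select table_name, partition_name
                  from dba_tab_statistics
                 where owner = i_schema
                   and stale_stats = 'YES'
                   and stattype_locked is null)
      loop
        -- partname is NULL for table-level rows, so this handles both cases.
        dbms_stats.gather_table_stats(ownname  => i_schema,
                                      tabname  => r.table_name,
                                      partname => r.partition_name);
      end loop;
    end;
    /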

  • Collecting statistics for tables using jobs in Oracle 11g 11.2.0.1.0?

    Hello

    My query is regarding the collection of statistics for tables.

    My Version of Oracle DB is:
    BANNER
    --------------------------------------------------------------------------------
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production

    In prior Oracle DB versions, we used to schedule daily jobs for gathering statistics, especially for tables with frequent and huge inserts.

    I read that in 11g, statistics for all schemas in the database are gathered automatically by nightly jobs. I checked these jobs and I see that they run on a monthly basis (query attached). The job is enabled and scheduled to run monthly.
    Does that mean my schema will be analyzed only monthly? Is my understanding correct?

    Can I still schedule jobs to gather statistics for specific tables every week? Will this hurt performance?
    We expect 100,000 records to be inserted daily.
    SELECT  JOB_NAME,  
       START_DATE, REPEAT_INTERVAL, 
         LAST_START_DATE, 
        NEXT_RUN_DATE,ENABLED
    FROM dba_scheduler_jobs
    WHERE job_name LIKE '%STAT%'
    ORDER BY 1;
    JOB_NAME: BSLN_MAINTAIN_STATS_JOB
      START_DATE:      16-AUG-09 12.00.00.000000000 AM -07:00
      REPEAT_INTERVAL: (null)
      LAST_START_DATE: 14-APR-13 12.00.00.427370000 AM -07:00
      NEXT_RUN_DATE:   21-APR-13 12.00.00.400000000 AM -07:00
      ENABLED:         TRUE

    JOB_NAME: MGMT_STATS_CONFIG_JOB
      START_DATE:      15-AUG-09 12.24.04.694342000 AM -07:00
      REPEAT_INTERVAL: freq=monthly;interval=1;bymonthday=1;byhour=01;byminute=01;bysecond=01
      LAST_START_DATE: 01-APR-13 01.01.01.710280000 AM -07:00
      NEXT_RUN_DATE:   01-MAY-13 01.01.01.700000000 AM -07:00
      ENABLED:         TRUE
    Thank you
    Somiya

    Your understanding is not correct. Those jobs are in dba_autotask_task,
    and they run every day.
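
    To confirm, a quick check against the 11g autotask views (a sketch; the client name string is the standard one for the nightly optimizer stats collection):

    select client_name, status
      from dba_autotask_client
     where client_name = 'auto optimizer stats collection';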

    HTH

    -------------
    Sybrand Bakker
    Senior Oracle DBA

  • collect statistics error

    Database version is 11g r1 11.1.0.6
    I am currently gathering schema statistics and I am facing the following error:

    SQL> exec dbms_stats.gather_schema_stats(ownname => 'WMWHSE1', estimate_percent => dbms_stats.auto_sample_size, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
    BEGIN dbms_stats.gather_schema_stats(ownname => 'WMWHSE1', estimate_percent => dbms_stats.auto_sample_size, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE); END;

    *
    ERROR on line 1:
    ORA-03113: end of file on communication channel
    Process ID: 24820
    Session ID: 1038 serial number: 34509

    The first time I gathered schema stats successfully, but the second time I try it gives this error.

    947721 wrote:

    SQL> exec dbms_stats.gather_schema_stats(ownname => 'WMWHSE1', estimate_percent => dbms_stats.auto_sample_size, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
    BEGIN dbms_stats.gather_schema_stats(ownname => 'WMWHSE1', estimate_percent => dbms_stats.auto_sample_size, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE); END;

    *
    ERROR on line 1:
    ORA-03113: end of file on communication channel
    Process ID: 24820
    Session ID: 1038 serial number: 34509

    You don't say how long the task runs before crashing.
    If it isn't a long time, you could enable event 10046 (using dbms_monitor, perhaps) at level 4 (binds).
    This can tell you exactly what the session was doing when it crashed.
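
    For example, a sketch using dbms_monitor against the session running the gather (the sid/serial# shown are the ones from the error message above):

    -- 10046 level-4 style tracing: bind values, no waits.
    exec dbms_monitor.session_trace_enable(session_id => 1038, serial_num => 34509, waits => FALSE, binds => TRUE);

    -- ... reproduce the failure, then (if the session is still alive):
    exec dbms_monitor.session_trace_disable(session_id => 1038, serial_num => 34509);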

    Note: if the default option is "gather stale", that would explain why the job can run successfully several times before a critical table goes stale and causes a crash on the next collection.

    Regards,
    Jonathan Lewis

  • Gathering table statistics on Windows

    Hello

    Could you please help me gather statistics for the tables below on Windows? I tried using a dbms_scheduler job for one table, but it is not gathering the statistics.

    OWNER   TABLE_NAME         NUM_ROWS     BLOCKS     SIZE

    MQRDW   DWREFERENCETRACE   2891985937   26372904   17.86 GB

    TRAY    PON_SERIAL         563722072    5135734    8.12 GB

    TRAY    PON_PSN            2358851732009064        2.67 GB

    TRAY    PON_BOM_LOG        37199475     212936     601 MB

    TRAY    UNIQUE_ITEM_LOTS   6633907      79710      160 MB

    TRAY    PON_BOM            5921377      41717328 MB

    SQL> begin
           dbms_scheduler.create_job(
             job_name        => 'SCOTT_JOB_SCHEDULE',
             job_type        => 'EXECUTABLE',
             job_action      => 'begin dbms_stats.gather_table_stats(ownname => ''TRACE3'',
                                   tabname          => ''PON_PSN'',
                                   estimate_percent => 30,
                                   cascade          => true,
                                   method_opt       => ''FOR ALL COLUMNS SIZE 1'',
                                   degree           => 5,
                                   options          => ''GATHER STALE''); end;',
             repeat_interval => 'freq=daily; byhour=04; byminute=0; bysecond=0;',
             enabled         => TRUE,
             comments        => 'custom stats collection for risk engine');
         end;
         /

    PL/SQL procedure successfully completed.


    Total number of CPU available on this server is 8

    Oracle - 10.2.0.4.0

    OS - windows

    Kind regards

    Bala

    52 million rows?

    And how many of them change or are inserted every day?

    It is possible that you may never need to gather its stats again,

    and everything will probably continue to work as expected.

    Read the Performance Tuning Guide to understand why and when you need to gather statistics:

    https://docs.Oracle.com/CD/E11882_01/server.112/e41573/stats.htm#PFGRF94714

    After reading this guide, you'll be ready for any question you get from that team.

    Sometimes you need to say 'no' and explain why the answer is 'no'.
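
    Separately, if the scheduler job from the question is the real problem, one thing worth checking is the job type: a PL/SQL job_action normally belongs to a PLSQL_BLOCK job, not an EXECUTABLE one. A minimal sketch along those lines (names and values mirror the question; the options parameter is omitted because gather_table_stats may not accept it in this release, so this is an assumption about the intent, not a confirmed fix):

    begin
      dbms_scheduler.create_job(
        job_name        => 'SCOTT_JOB_SCHEDULE',
        job_type        => 'PLSQL_BLOCK',   -- anonymous PL/SQL block, not an OS executable
        job_action      => q'[begin
                                dbms_stats.gather_table_stats(
                                  ownname          => 'TRACE3',
                                  tabname          => 'PON_PSN',
                                  estimate_percent => 30,
                                  cascade          => true,
                                  method_opt       => 'FOR ALL COLUMNS SIZE 1',
                                  degree           => 5);
                              end;]',
        repeat_interval => 'freq=daily; byhour=4; byminute=0; bysecond=0',
        enabled         => TRUE,
        comments        => 'custom stats collection for risk engine');
    end;
    /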

  • collect statistics for the tablespace

    Friends...

    OS: Linux

    DB: 11gR2

    Data size: 1 TB

    Every month I move multiple monthly partitioned tablespaces and combine them into a single annual tablespace (for example tbs_2014_01, tbs_2014_02 ... tbs_2014_12 are all combined into tbs_2014).

    Over the weekend, a database job runs that gathers stale statistics; it picks up all the segments that have been moved between tablespaces.

    Since the weekend statistics collection takes too long, I am trying to find a smarter way to gather statistics after each tablespace move rather than waiting for the weekend job, which takes two or three days to complete.

    1. Is there a way to gather statistics at the tablespace level, i.e. gather statistics for all objects in that tablespace?

    2. How do I handle the global statistics part of the collection?

    That is, suppose I move the tbs_2014_01 tablespace and gather statistics including global stats; that could take 2 hours, and it would be hard to spend 2 hours on global stats for every tablespace. In my opinion that is not good, and we should be gathering global stats only once.

    3. any other advice?

    977272 wrote:

    @sol.beach... Thanks for your comments...

    I am not asking how to gather statistics at the tablespace level as such, but how to gather statistics after the objects have finished moving tablespaces.

    Given the size of the data, it is difficult to gather all the statistics over the weekend, so I am trying to find another method of collecting statistics so that the weekend load is reduced.

    You can gather statistics on an object-by-object basis after each object has been moved.
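
    A minimal sketch of that object-by-object approach, driven off the target tablespace (non-partitioned tables only here; partitioned tables report their tablespace in dba_tab_partitions and would need similar handling):

    -- Gather stats for every table whose segments live in the given
    -- tablespace; run this right after the move completes.
    begin
      for t in (select owner, table_name
                  from dba_tables
                 where tablespace_name = 'TBS_2014')
      loop
        dbms_stats.gather_table_stats(ownname => t.owner,
                                      tabname => t.table_name);
      end loop;
    end;
    /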
