Statistics on Data Pump imports...

Hello

I didn't want to take over user2376553's thread (about import statistics using Data Pump), so I want to ask this question here.

I tried to find the answer by searching on Google, but found nothing on it.

When you use exp/imp, it creates statistics on the tables on a default import (CALCULATE is the default).

But what about Data Pump? Does it calculate statistics by default on import (and update the underlying tables behind DBA_TABLES)?

MOS Doc 793585.1 says that unless you use EXCLUDE=STATISTICS or EXCLUDE=INDEX_STATISTICS, Data Pump import always analyzes the indexes.

THAT IS FALSE!

Data Pump always restores the statistics. The statistics that are on the source database are restored on the target database, not regenerated (the tables are NOT re-analyzed). The only time that's not true is before 11.1.0.7, and only in 2 cases. Before 11.1.0.7, the only time that statistics were regenerated is when there are system-generated column names or system-generated index names. In those 2 cases, the statistics are regenerated (the tables or indexes are re-analyzed). In all other cases, they are restored.

The behavior of Data Pump before 11.1.0.7 is the same as exp/imp: statistics are normally restored unless
1. there were system-generated names (column or index)
2. the user specified STATISTICS=RECALCULATE
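
If you explicitly do not want the source statistics restored, one common approach (a sketch, not from the original post; the schema, directory and file names are placeholders) is to exclude them on import and regather on the target:

    impdp system/password schemas=HR directory=DP_DIR dumpfile=hr.dmp exclude=statistics

    -- then, on the target:
    exec dbms_stats.gather_schema_stats(ownname => 'HR')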

Dean

Tags: Database

Similar Questions

export schema dump from 11g Rel 2 to 10g Rel 2

    Hi experts

I get this error when importing an 11.2.0.1.0 schema dump into 10.2.0.5.0:

    E:\app\oracle\product\11.2.0.1.0\dbhome_1\BIN> exp system/oracle123 OWNER=expotest,mddata,VI001AW COMPATIBLE=Y FILE=E:\DUMPS\EXPO.DMP LOG=E:\DUMPS\EXPO.LOG STATISTICS=NONE

    Import: Release 10.2.0.5.0 - Production on Tue Oct 2 15:49:17 2012

    Copyright (c) 1982, 2007, Oracle. All rights reserved.


    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
    With partitioning, OLAP, Data Mining and Real Application Testing options

    IMP-00010: not a valid export file, header check failed
    IMP-00000: Import terminated unsuccessfully
    d:\oracle\product\10.2.0\db_1\bin >

    Thank you

    Well, you already got your answer: redo the export with Data Pump (expdp) using the VERSION parameter.
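
    For example (a sketch; the schema, directory and file names are placeholders):

    expdp system/oracle123 schemas=expotest directory=DUMPS dumpfile=expo.dmp logfile=expo.log version=10.2

    VERSION=10.2 writes the dump file in a format the 10.2.0.5 impdp can read; note the VERSION parameter only exists for expdp/impdp, not for the original exp/imp utilities.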

  • Import does not work, inconsistencies

    The issue is upgrading an old library, iPhoto 08 on 10.6, to the Photos library on 10.11. I ran the iPhoto Library Upgrader and then installed the latest iPhoto from the App Store. It upgraded the library successfully and everything seemed to work. I then ran all the repair processes by starting iPhoto with the alt-cmd key combination after I ran into these issues, but the repair did not help.

    The problem with Photos is: when selecting the iPhoto library, it starts the import but stops at 14% and raises an error that there are inconsistencies in the library. The only option is to quit, and the same thing happens the next time.

    The Photo Library Migration utility seems to fail for a reason; here are the logs:

    Feb 9 14:14:45 iMac Photos[582]: attempt to take pid lock for: file:///Users/[username]/Pictures/iPhoto%20Library/, , 1, attempts: 3

    Feb 9 14:14:45 iMac syslogd[45]: ASL Sender Statistics

    Feb 9 14:14:45 iMac Photos[582]: checking permissions before running migration

    Feb 9 14:14:45 iMac Photos[582]: begin library permission check

    Feb 9 14:14:46 iMac Photos[582]: end library permission check, error: (null)

    Feb 9 14:14:46 iMac Photo Library Migration Utility[584]: # AddressBook cannot load class CNContactNameFormatter

    Feb 9 14:14:46 iMac Photo Library Migration Utility[584]: system drive path does not contain the path: System/Library/CoreServices/Photo Library Migration Utility.app/Contents/Frameworks/ProKit.framework

    Feb 9 14:14:46 iMac Photo Library Migration Utility[584]: Error: NSProCommonAssetStorage - initWithPath: no storage file found (null)

    Feb 9 14:14:46 iMac Photo Library Migration Utility[584]: PLMU: Opening "file:///Users/[username]/Pictures/iPhoto%20Library/" "com.apple.Photos" "Kuvat Library" m - t 1.1 (910.8)

    Feb 9 14:14:47 iMac Photo Library Migration Utility[584]: PLMU: DFHSMT: 9973 36369 0 10 46352 15080

    Feb 9 14:14:47 iMac Photo Library Migration Utility[584]: PLMU: preflight completed (0.4029 s)

    Feb 9 14:14:47 iMac Photo Library Migration Utility[584]: PLMU: required space calculated (0.3030 s)

    Feb 9 14:14:47 iMac Photo Library Migration Utility[584]: PLMU: av = 763772, lib = 15080, ec = 1097, eu = 1645

    Feb 9 14:14:47 iMac Photo Library Migration Utility[584]: PLMU: disk free space check completed (0.0031 s)

    Feb 9 14:14:47 iMac Photo Library Migration Utility[584]: PLMU: working copy: file:///Users/[username]/Pictures/Kuvat%20Library.photoslibrary_prepare/, error: Error Domain=com.apple.Aperture.ErrorDomain Code=0 "(null)" UserInfo={PLMU_MT_DurPreHandoff=0}

    Feb 9 14:15:09 iMac mds[63]: (DiskStore.Normal:2382) b002 1.000023

    Feb 9 14:15:42 iMac Photo Library Migration Utility[584]: PLMU: cloning completed (55.5238 s)

    Feb 9 14:15:42 iMac Photo Library Migration Utility[584]: PLMU: av = 763772, lib = 15080, ec = 1097, c = 208, eu = 1645

    Feb 9 14:15:43 iMac Photo Library Migration Utility[584]: [CIImage initWithCGImage:options:] failed because the CGImage sucks

    Feb 9 14:15:44 --- last message repeated 3 times ---

    Feb 9 14:15:44 iMac Photo Library Migration Utility[584]: PLMU: exception: *** -[NSPathStore2 substringToIndex:]: Index 9223372036854775807 out of bounds; string length 6 (stack:

    0   CoreFoundation                      0x00007fff981a3ae2 __exceptionPreprocess + 178

    1   libobjc.A.dylib                     0x00007fff94bd3f7e objc_exception_throw + 48

    2   CoreFoundation                      0x00007fff981a398d +[NSException raise:format:] + 205

    3   Foundation                          0x00007fff905440ac -[NSString substringToIndex:] + 118

    4   iLifePageLayout                     0x000000010292578a +[KHProject(Upgrade) createWithDictionary:photoUIDs:andName:isShell:] + 234

    5   Photo Library Migration Utility     0x0000000101610c69 Photo Library Migration Utility + 1080425

    6   Photo Library Migration Utility     0x0000000101720006 Photo Library Migration Utility + 2191366

    7   Photo Library Migration Utility     0x000000010153be27 Photo Library Migration Utility + 208423

    8   Photo Library Migration Utility     0x000000010171fb43 Photo Library Migration Utility + 2190147

    9   Photo Library Migration Utility     0x000000010171e9bb Photo Library Migration Utility + 2185659

    10  Photo Library Migration Utility     0x00000001017e2981 Photo Library Migration Utility + 2988417

    11  Foundation                          0x00007fff905aed4b __NSThreadPerformPerform + 279

    12  CoreFoundation                      0x00007fff980b35c1 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 17

    13  CoreFoundation                      0x00007fff980a541c __CFRunLoopDoSources0 + 556

    14  CoreFoundation                      0x00007fff980a493f __CFRunLoopRun + 927

    15  CoreFoundation                      0x00007fff980a4338 CFRunLoopRunSpecific + 296

    16  HIToolbox                           0x00007fff9ccba935 RunCurrentEventLoopInMode + 235

    17  HIToolbox                           0x00007fff9ccba76f ReceiveNextEventCommon + 432

    18  HIToolbox                           0x00007fff9ccba5af _BlockUntilNextEventMatchingListInModeWithFilter + 71

    19  AppKit                              0x00007fff8a4840ee _DPSNextEvent + 1067

    20  AppKit                              0x00007fff8a850943 -[NSApplication _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 454

    21  AppKit                              0x00007fff8a479fc8 -[NSApplication run] + 682

    22  AppKit                              0x00007fff8a3fc520 NSApplicationMain + 1176

    23  Photo Library Migration Utility     0x00000001015194df Photo Library Migration Utility + 66783

    24  libdyld.dylib                       0x00007fff9d9e45ad start + 1

    25  ???                                 0x0000000000000002 0x0 + 2

    )

    Feb 9 14:15:55 iMac Photo Library Migration Utility[584]: PLMU: av = 763772, lib = 15080, ec = 1097, c = 208, eu = 1645, t = 763772

    Feb 9 14:15:55 iMac Photo Library Migration Utility[584]: PLMU: MigrationComplete library URL: (null) error: Error Domain=com.apple.Aperture.ErrorDomain Code=1106 "(null)" UserInfo={NSUnderlyingError=0x7f92f3aa5470 {Error Domain=com.apple.Aperture.ErrorDomain Code=2020 "(null)" UserInfo={exception=NSRangeException}}}

    Feb 9 14:15:55 iMac Photos[582]: PLMU migration failed: Error Domain=com.apple.Aperture.ErrorDomain Code=1106 "(null)" UserInfo={NSUnderlyingError=0x7f91497273e0 {Error Domain=com.apple.Aperture.ErrorDomain Code=2020 "(null)" UserInfo={exception=NSRangeException}}}

    Any ideas how to fix this problem, or do I just need to export everything from iPhoto and manually re-import the photos?

    You could try to rebuild the iPhoto library with iPhoto Library Manager; it can fix libraries when the repair tools built into iPhoto are not enough.

    You can download a free trial here: Fat Cat Software

    For rebuilding libraries, the free trial version is sufficient.

    Then try the migration again with the rebuilt library.    iPhoto Library Manager will create a copy and leave the original library intact.

  • How to view monthly/annual statistics in terms of time spent/calories burned, broken down by each individual activity such as elliptical/outdoor run etc.? Is there a third-party application that can help me collect and display these data?

    How to view monthly/annual statistics in terms of time spent/calories burned, broken down by each individual activity such as elliptical/outdoor run etc.? Is there a third-party application that can help me collect and display these data?

    Hello

    It is not currently possible to review the data on that basis within the built-in Activity or Workout apps. If you would like Apple to consider adding this feature, you can suggest it here:

    https://www.Apple.com/feedback/watch.html

    However, health and fitness data from other sources, iPhone, and Apple Watch are recorded and grouped together within the Health app on iPhone. These data can be exported, which you may find useful for tracking cumulative progress and/or analyzing your activity in more detail.

    The Activity app on iPhone also has a Share button (top right of the screen) that allows data to be shared - including via social media, Messages, Mail, Notes, and to a printer.

    Third-party applications may also be useful, for example:

    QS Access

    -"Access your HealthKit data in a table so you can Explorer using numbers, Excel, R, or any other tool compatible CSV."

    - https://itunes.apple.com/gb/app/qs-access/id920297614?mt=8

    SpectaRun workouts

    -"View from the workouts of your Apple Watch on your iPhone and to export these workouts so you can download them to your favorite online running community."

    - https://itunes.apple.com/gb/app/spectarun-workouts/id991723862?mt=8

    Data can also be exported directly from the Health app (Health Data > All - Share button at the top right).

    Check the descriptions and support resources of third-party applications for details of their supported data import and analysis features.

    More information:

    Use the Activity app on your Apple Watch - Apple Support

    Use the Workout app on your Apple Watch - Apple Support

    http://www.Apple.com/watch/health-and-fitness/

  • How to import a csv file into a histogram

    Hi, I'm looking for a way to import data from a csv file into LabVIEW 8.6 and have it create a histogram chart. I've seen this done before, so I know it's possible; I just need some starter resources or vis to get me in the right direction. I have thousands of entries, and it gets tedious importing via macros. Does anyone know of a vi or toolkit that will give me the building blocks to do so? Appreciate any help, thanks, AJ

    Bob, the OP clearly has LabVIEW 8.6.

    AlphaDog,

    Do you have a shortened example of your data file?  Are there several channels or just one?

    No matter; if you look in the Mathematics -> Probability & Statistics palette, you will find a Histogram.vi that will take an array of data and make a histogram out of it.  To display it, use a chart and change the plot type to be a bar chart.

  • 11g: global statistics, once gathered, block incremental statistics

    This is a follow-up question to 11g: incremental statistics - effects of bad implementation

    In a system that is supposed to use incremental statistics, someone gathered global statistics by mistake.

    These global statistics seem to block the move to incremental statistics.

    What exactly the GLOBAL_STATS column of user_tab_col_statistics means here is part of the question.


    See this demo program:

    SET LINESIZE 130
    set serveroutput on
    
    call dbms_output.put_line('drop and re-create table ''T''...');
    
    -- clean up
    DROP TABLE t;
    
    -- create a table with partitions and subpartitions
    CREATE TABLE t (
      tracking_id       NUMBER(10),
      partition_key     DATE,
      subpartition_key  NUMBER(3),
      name              VARCHAR2(20 CHAR),
      status            NUMBER(1)
    )
    ENABLE ROW MOVEMENT
    PARTITION BY RANGE(partition_key) SUBPARTITION BY HASH(subpartition_key)
    SUBPARTITIONS 4
    (
      PARTITION P_DATE_TO_20160101 VALUES LESS THAN (to_date('2016-01-01', 'YYYY-MM-DD')),
      PARTITION P_DATE_TO_20160108 VALUES LESS THAN (to_date('2016-01-08', 'YYYY-MM-DD')),
      PARTITION P_DATE_TO_20160115 VALUES LESS THAN (to_date('2016-01-15', 'YYYY-MM-DD')),
      PARTITION P_DATE_OTHER VALUES LESS THAN (MAXVALUE)
    );
    
    CREATE UNIQUE INDEX t_pk ON t(partition_key, subpartition_key, tracking_id);
    
    ALTER TABLE t ADD CONSTRAINT t_pk PRIMARY KEY(partition_key, subpartition_key, tracking_id);
    ALTER TABLE t ADD CONSTRAINT t_partition_key_nn check (partition_key IS NOT NULL);
    ALTER TABLE t ADD CONSTRAINT t_subpartition_key_nn check (subpartition_key IS NOT NULL);
    
    call dbms_output.put_line('populate table ''T''...'); 
    
    -- insert values into table (100 for first 2 partitions, last 2 remain empty)
    BEGIN
      FOR i IN 1..100
      LOOP
        INSERT INTO t VALUES(i,to_date('2015-12-31', 'YYYY-MM-DD'), i, 'test' || to_char(i), MOD(i,4) );
      END LOOP;
    
      FOR i IN 101..200
      LOOP
        INSERT INTO t VALUES(i,to_date('2016-01-07', 'YYYY-MM-DD'), i, 'test2' || to_char(i), MOD(i, 4));
      END LOOP;
      commit;
    END;
    /
    
    -- lock table stats, so that no automatic mechanism of Oracle can disturb us
    call dbms_output.put_line('lock statistics for table ''T''...');
    
    BEGIN
      dbms_stats.lock_table_stats(USER, 'T');
      commit;
    END;
    /
    
    call dbms_output.put_line('delete statistics for table ''T''...');
    
    -- make sure we have no table stats to begin with
    BEGIN
      dbms_stats.delete_table_stats(
        ownname => USER,
        tabname => 'T',
        partname => NULL,
        cascade_columns => TRUE,
        cascade_indexes => TRUE,
        force => TRUE);
      commit;
    END;
    /
    
    begin
      for rec in (select table_name, partition_name from user_tab_partitions where table_name = 'T')
      loop
        dbms_stats.delete_table_stats(
           ownname         => USER,
           tabname         => rec.table_name,
           partname        => rec.partition_name,
           cascade_columns => TRUE,
           cascade_indexes => TRUE,
           force           => TRUE);
     end loop;
     commit;
    end;
    /
    
    call dbms_output.put_line('verify no data avail in user_tab_col_statistics for table ''T''...');
    
    select table_name, column_name, to_char(last_analyzed, 'YYYY-MM-DD HH:MI:SS'), global_stats from user_tab_col_statistics where table_name = 'T';
    
    call dbms_output.put_line('verify no data avail in user_part_col_statistics for table ''T''...');
    
    -- not sure, if this is correct(?!):
    select * from user_part_col_statistics where table_name = 'T' and not (num_distinct is null and low_value is null and high_value is null);
    
    call dbms_output.put_line('''accidentally'' gather global stats in user_tab_col_statistics for table ''T''...');
    
    begin
        dbms_stats.gather_table_stats(
          ownname => USER,
          tabname => 'T',
          partname => null,
          estimate_percent => 1,
          degree => 5,
          block_sample => TRUE,
          granularity => 'GLOBAL',
          cascade => TRUE,
          force => TRUE,
          method_opt => 'FOR ALL COLUMNS SIZE 1'
        );
    end;
    /
    
    call dbms_output.put_line('verify global_stats in user_tab_col_statistics for table ''T''...');
    
    select table_name, column_name, to_char(last_analyzed, 'YYYY-MM-DD HH:MI:SS'), global_stats from user_tab_col_statistics where table_name = 'T';
    
    call dbms_output.put_line('wait 30 seconds...');
    
    call dbms_lock.sleep(30); -- might require to grant access
    
    call dbms_output.put_line('...done');
    
    call dbms_output.put_line('try to update global_stats in user_tab_col_statistics for table ''T'' by partition level statistic updates...');
    
    begin
      for rec in (select table_name, partition_name from user_tab_partitions where table_name = 'T')
      loop
        dbms_stats.gather_table_stats(
          ownname => USER,
          tabname => rec.table_name,
          partname => rec.partition_name,
          estimate_percent => 1,
          degree => 5,
          block_sample => TRUE,
          granularity => 'PARTITION',
          cascade => TRUE,
          force => TRUE,
          method_opt => 'FOR ALL COLUMNS SIZE 1'
        );
        dbms_stats.gather_table_stats(
          ownname => USER,
          tabname => rec.table_name,
          partname => rec.partition_name,
          estimate_percent => 1,
          degree => 5,
          block_sample => TRUE,
          granularity => 'SUBPARTITION',
          cascade => TRUE,
          force => TRUE,
          method_opt => 'FOR ALL COLUMNS SIZE 1');  
      end loop;
    end;
    /
    
    call dbms_output.put_line('re-verify global_stats in user_tab_col_statistics for table ''T'' (check for last_analyzed and global_stats)...');
    
    select table_name, column_name, to_char(last_analyzed, 'YYYY-MM-DD HH:MI:SS'), global_stats from user_tab_col_statistics where table_name = 'T';
    

    Output:

    Call completed.
    
    drop and re-create table 'T'...
    
    
    Table T dropped.
    
    
    Table T created.
    
    
    Unique index T_PK created.
    
    Table T altered.
    
    
    Table T altered.
    
    
    Table T altered.
    
    
    Call completed.
    
    populate table 'T'...
    
    
    PL/SQL procedure successfully completed.
    
    Call completed.
    
    lock statistics for table 'T'...
    
    
    PL/SQL procedure successfully completed.
    
    
    Call completed.
    
    delete statistics for table 'T'...
    
    
    PL/SQL procedure successfully completed.
    
    
    PL/SQL procedure successfully completed.
    
    
    Call completed.
    
    verify no data avail in user_tab_col_statistics for table 'T'...
    
    
    no rows selected
    
    
    
    Call completed.
    
    verify no data avail in user_part_col_statistics for table 'T'...
    
    
    no rows selected
    
    
    Call completed.
    
    'accidentally' gather global stats in user_tab_col_statistics for table 'T'...
    
    
    PL/SQL procedure successfully completed.
    
    
    Call completed.
    
    verify global_stats in user_tab_col_statistics for table 'T'...
    
    
    TABLE_NAME                     COLUMN_NAME                    TO_CHAR(LAST_ANALYZ GLO
    ------------------------------ ------------------------------ ------------------- ---
    T                              TRACKING_ID                    2016-01-28 02:09:31 YES
    T                              PARTITION_KEY                  2016-01-28 02:09:31 YES
    T                              SUBPARTITION_KEY               2016-01-28 02:09:31 YES
    T                              NAME                           2016-01-28 02:09:31 YES
    T                              STATUS                         2016-01-28 02:09:31 YES
    
    Call completed.
    
    wait 30 seconds...
    
    Call completed.
    
    
    Call completed.
    
    ...done
    
    
    Call completed.
    
    try to update global_stats in user_tab_col_statistics for table 'T' by partition level statistic updates...
    
    PL/SQL procedure successfully completed.
    
    
    Call completed.
    
    re-verify global_stats in user_tab_col_statistics for table 'T' (check for last_analyzed and global_stats)...
    
    
    TABLE_NAME                     COLUMN_NAME                    TO_CHAR(LAST_ANALYZ GLO
    ------------------------------ ------------------------------ ------------------- ---
    T                              TRACKING_ID                    2016-01-28 02:09:31 YES
    T                              PARTITION_KEY                  2016-01-28 02:09:31 YES
    T                              SUBPARTITION_KEY               2016-01-28 02:09:31 YES
    T                              NAME                           2016-01-28 02:09:31 YES
    T                              STATUS                         2016-01-28 02:09:31 YES
    
    

    It seems that the solution is to call delete_table_stats with the parameter cascade_parts => FALSE:

    begin
      dbms_stats.delete_table_stats(
        ownname => USER,
        tabname => 'T',
        partname => NULL,
        cascade_parts => FALSE, -- this is important
        cascade_columns => TRUE,
        cascade_indexes => TRUE,
        force => TRUE);
    end;
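
    Once the stray global statistics are deleted, switching to incremental statistics is done via table preferences. A minimal sketch (this part is not from the original post; it assumes the standard DBMS_STATS preference API):

    begin
      -- mark T for incremental (synopsis-based) global statistics
      dbms_stats.set_table_prefs(USER, 'T', 'INCREMENTAL', 'TRUE');
      -- AUTO granularity lets partition-level gathers roll up into global stats
      dbms_stats.gather_table_stats(USER, 'T', granularity => 'AUTO');
    end;
    /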
    
  • DataPump: Rows in tables do not match after import

    Hello

    I have exported a schema 'ORAMSCA' and imported it as schema 'ORAMSCA_TEST151223'. I found differences between the row counts of the tables. Could you please advise.

    Here are the details.

    TABLE_NAME                     ORAMSCA_TEST151223    ORAMSCA
    ------------------------------ ------------------ ----------
    OM_AUDIT_TRAIL                               2952       3367
    OM_CONFIG_OPTIONS                              33         40
    OM_COUNTRY_STATES                            3456       3456
    OM_ENTITY_MENU                                 86         91
    OM_FUNFACTS                                    64         69
    OM_INSTANCES                                   81         61
    OM_JOBS                                       139        111
    OM_JOB_LOGS                                   226        132
    OM_JOB_PARAMS                                  37         19
    OM_LICENSES                                    15         15
    OM_LOGIN_HISTORY                             1289       1594
    OM_LOOKUP_CODES                               900        904
    OM_LOOKUP_TYPES                                31         31
    OM_ORG_ORGANIZATIONS                         9625       8225
    OM_ROLE_RIGHTS                                 36         36
    OM_USERS                                       29         48
    OM_USER_ENTITY_MENU_ACCESS                    728       1983
    OM_USER_ORGANIZATION_ACCESS                    40        156


    Exported log file:

    ;;;

    Export: Release 11.2.0.1.0 - Production on Fri Dec 18 09:22:16 2015

    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

    ;;;

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

    With partitioning, OLAP, Data Mining and Real Application Testing options

    Start "SYSTEM". "" SYS_EXPORT_SCHEMA_01 ": System / * schema = oramsca = ORAMSCA_TEST1.dmp = expdp.log = DUMP directory logfile dumpfile

    Current estimation using BLOCKS method...

    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

    Total estimation using BLOCKS method: 2.937 MB

    Processing object type SCHEMA_EXPORT/USER

    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

    Processing object type SCHEMA_EXPORT/ROLE_GRANT

    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

    Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA

    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

    Processing object type SCHEMA_EXPORT/DB_LINK

    Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE

    Processing object type SCHEMA_EXPORT/TABLE/TABLE

    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

    Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC

    Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION

    Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE

    Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC

    Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION

    Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE

    Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY

    Processing object type SCHEMA_EXPORT/JAVA_SOURCE/JAVA_SOURCE

    Processing object type SCHEMA_EXPORT/JAVA_CLASS/JAVA_CLASS

    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

    . . exported "ORAMSCA." "" OM_ORG_ORGANIZATIONS "8225 lines 620,3 KB

    . . exported "ORAMSCA." "" OM_AUDIT_TRAIL "3283 lines 285,2 KB

    . . exported "ORAMSCA." "' OM_ENTITY_MENU ' lines of 145,2 90 KB

    . . exported "ORAMSCA." "' OM_COUNTRY_STATES ' 3456 lines 186,1 KB

    . . exported "ORAMSCA." "" OM_CONFIG_OPTIONS "the 40 lines 67,10 KB

    . . exported "ORAMSCA." "' OM_LOGIN_HISTORY ' 118.2 lines 1525 KB

    . . exported "ORAMSCA." "' OM_LOOKUP_CODES ' 904 lines 68,61 KB

    . . exported "ORAMSCA." "' OM_USER_ENTITY_MENU_ACCESS ' 78.60 lines 1644 KB

    . . exported "ORAMSCA." "' OM_FUNFACTS ' lines of 15,06 67 KB

    . . exported "ORAMSCA." "' OM_INSTANCES ' lines of 17,37 61 KB

    . . exported "ORAMSCA." "' OM_JOBS ' 111 lines 19.54 KB

    . . exported "ORAMSCA." "' OM_JOB_LOGS ' 132 lines 19,02 KB

    . . exported "ORAMSCA." "' OM_JOB_PARAMS ' Ko 9,726, 19 ranks

    . . exported "ORAMSCA." "" OM_LICENSES "15 lines 10,25 KB

    . . exported "ORAMSCA." "' OM_LOOKUP_TYPES ' 31 lines 11.50 KB

    . . exported "ORAMSCA." "' OM_ROLE_RIGHTS ' 36 lines 14,43 KB

    . . exported "ORAMSCA." "' OM_USERS ' 44 lines 21,57 KB

    . . exported "ORAMSCA." "' OM_USER_ORGANIZATION_ACCESS ' 147 lines 15,56 KB

    Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

    ******************************************************************************

    Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:

      /U01/oracle11/dump/ORAMSCA_TEST1.dmp

    Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 09:23:14

    Import log file:

    ;;;

    Import: Release 11.2.0.1.0 - Production on Wed Dec 23 01:00:58 2015

    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

    ;;;

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

    With partitioning, OLAP, Data Mining and Real Application Testing options

    Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded

    Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** directory=DUMP dumpfile=expdp_ORAMSCA_151221.dmp logfile=impdp.log remap_schema=ORAMSCA:ORAMSCA_TEST151223 remap_tablespace=ORAMSCA:ORAMSCA_TEST151223

    Processing object type SCHEMA_EXPORT/USER

    ORA-31684: Object type USER:"ORAMSCA_TEST151223" already exists

    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

    Processing object type SCHEMA_EXPORT/ROLE_GRANT

    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

    Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA

    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

    Processing object type SCHEMA_EXPORT/DB_LINK

    Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE

    Processing object type SCHEMA_EXPORT/TABLE/TABLE

    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

    . . imported "ORAMSCA_TEST151223." "' OM_ORG_ORGANIZATIONS ' 9625 lines 724,5 KB

    . . imported "ORAMSCA_TEST151223." "' OM_AUDIT_TRAIL ' 258.6 lines 2952 KB

    . . imported "ORAMSCA_TEST151223." "' OM_ENTITY_MENU ' lines of 166.7 86 KB

    . . imported "ORAMSCA_TEST151223." "' OM_CONFIG_OPTIONS ' 33 lines 59,26 KB

    . . imported "ORAMSCA_TEST151223." "' OM_COUNTRY_STATES ' 3456 lines 186,1 KB

    . . imported "ORAMSCA_TEST151223." "' OM_LOGIN_HISTORY ' 102.8 lines 1289 KB

    . . imported "ORAMSCA_TEST151223." "' OM_LOOKUP_CODES ' 900 lines 68,36 KB

    . . imported "ORAMSCA_TEST151223." "' OM_USER_ENTITY_MENU_ACCESS ' 728 lines 39,46 KB

    . . imported "ORAMSCA_TEST151223." "' OM_FUNFACTS ' lines of 14.87 64 KB

    . . imported "ORAMSCA_TEST151223." "' OM_INSTANCES ' lines of 19.66 81 KB

    . . imported "ORAMSCA_TEST151223." "' OM_JOBS ' 139 lines 22,07 KB

    . . imported "ORAMSCA_TEST151223." "' OM_JOB_LOGS ' 226 KB 25,73 lines

    . . imported "ORAMSCA_TEST151223." "' OM_JOB_PARAMS ' 37 lines 10,74 KB

    . . imported "ORAMSCA_TEST151223." "" OM_LICENSES "15 lines 10,25 KB

    . . imported "ORAMSCA_TEST151223." "' OM_LOOKUP_TYPES ' 31 lines 11.50 KB

    . . imported "ORAMSCA_TEST151223." "' OM_ROLE_RIGHTS ' 36 lines 14,43 KB

    . . imported "ORAMSCA_TEST151223." "' OM_USERS ' 29 ranks 19.38 KB

    . . imported "ORAMSCA_TEST151223." "" OM_USER_ORGANIZATION_ACCESS "the 40 lines 10.53 KB

    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

    Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC

    Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION

    Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE

    Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC

    Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION

    ORA-39083: Object type ALTER_FUNCTION failed to create with error:

    ORA-04052: error occurred when looking up remote object APPS.FND_APPLICATION_ALL_VIEW@SYSTEM_LINK_OM_VISMA

    ORA-00604: error occurred at recursive SQL level 3

    ORA-12154: TNS:could not resolve the connect identifier specified

    Failing sql is:

    ALTER FUNCTION "ORAMSCA_TEST151223"."OM_APPLICATION_VISMA" COMPILE PLSQL_OPTIMIZE_LEVEL= 2 PLSQL_CODE_TYPE= INTERPRETED PLSQL_DEBUG= FALSE PLSCOPE_SETTINGS= 'IDENTIFIERS:NONE' REUSE

    ORA-39083: Object type ALTER_FUNCTION failed to create with error:

    ORA-04052: error occurred when looking up remote object APPS.GL_CODE_COMBINATIONS_V@SYSTEM_LINK_OM_VISMA

    ORA-00604: error occurred at recursive SQL level 3

    ORA-12154: TNS:could not resolve the connect identifier specified

    Failing sql is:

    ALTER FUNCTION "ORAMSCA_TEST151223"."OM_CODE_COMBINATION_VISMA" COMPILE PLSQL_OPTIMIZE_LEVEL= 2 PLSQL_CODE_TYPE= INTERPRETED PLSQL_DEBUG= FALSE PLSCOPE_SETTINGS= 'IDENTIFIERS:NONE' REUS

    Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE

    Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY

    Processing object type SCHEMA_EXPORT/JAVA_SOURCE/JAVA_SOURCE

    Processing object type SCHEMA_EXPORT/JAVA_CLASS/JAVA_CLASS

    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

    Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 3 error(s) at 01:01:58

    Thank you

    Jeremy.

    It seems that you did not import the file that you exported:

    Export: dumpfile = ORAMSCA_TEST1.dmp

    Import: dumpfile = expdp_ORAMSCA_151221.dmp
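
    That is, a sketch of re-running the import against the dump file the export log actually produced (the names are taken from the logs above; TABLE_EXISTS_ACTION=REPLACE is an addition here so the already-created tables get rebuilt):

    impdp system/******** directory=DUMP dumpfile=ORAMSCA_TEST1.dmp logfile=impdp2.log remap_schema=ORAMSCA:ORAMSCA_TEST151223 remap_tablespace=ORAMSCA:ORAMSCA_TEST151223 table_exists_action=REPLACE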

    AJ

  • ORA-39117 with 'MDSYS.SDO_GEORASTER' when performing an import into a 12cR1 non-CDB

    Hello

    I'm trying to import a table from an Oracle 11gR2 (11.2.0.3) database server into a 12cR1 (12.1.0.2) Oracle database (non-CDB).

    Here's how I did the export of the source:

    SQL> show parameter db_name

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    db_name                              string      ZEUS

    SQL> select banner from v$version;

    BANNER
    --------------------------------------------------------------------------------
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production

    parfile: exp_missing_SPATIAL_SCHEMAS_ZEUS11g.par

    tables=MANGO.ORTHO_PHOTOS,MVDEMO_NATURALEARTH.WORLD_RASTER
    directory=DATA_PUMP_DIR
    reuse_dumpfiles=y
    exclude=statistics
    dumpfile=expdp_missing_SPATIAL_SCHEMAS_ZEUS11g.dmp
    logfile=logexpdp_missing_SPATIAL_SCHEMAS_ZEUS11g.log

    expdp system parfile=exp_missing_SPATIAL_SCHEMAS_ZEUS11g.par

    Here's how I did the import to the target:

    SQL> show parameter db_name

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    db_name                              string      ZEUS

    SQL> select banner from v$version;

    BANNER
    --------------------------------------------------------------------------------
    Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
    PL/SQL Release 12.1.0.2.0 - Production
    CORE    12.1.0.2.0      Production
    TNS for Linux: Version 12.1.0.2.0 - Production
    NLSRTL Version 12.1.0.2.0 - Production

    parfile: imp_missing_SPATIAL_SCHEMAS_ZEUS12c.par

    tables=MANGO.ORTHO_PHOTOS,MVDEMO_NATURALEARTH.WORLD_RASTER
    directory=TEMP_DUMPS
    dumpfile=expdp_missing_SPATIAL_SCHEMAS_ZEUS11g.dmp
    logfile=logimpdp_missing_SPATIAL_SCHEMAS_ZEUS12c.log

    impdp system parfile=imp_missing_SPATIAL_SCHEMAS_ZEUS12c.par

    I end up with this kind of error:

    ORA-39117: Type needed to create table is not included in this operation. Failing sql is:

    CREATE TABLE "MANGO"."ORTHO_PHOTOS" ("FID" NUMBER, "NAME" VARCHAR2(256 BYTE), "TYPE" VARCHAR2(256 BYTE), "IMAGE" "MDSYS"."SDO_GEORASTER")

    SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING

    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)

    TABLESPACE...

    Yet the 'MDSYS.SDO_GEORASTER' data type exists in both the source and the target database.

    Does someone have experience with this? Could it be related to a bug?

    Thanks in advance for any information.

    Kind regards

    Hi Laury.

    You say that the object exists, but have you checked that it is valid and usable?

    https://docs.Oracle.com/database/121/GEORS/release_changes.htm#GEORS1382

    I guess that with the 12c release, Oracle wants you to specifically take action showing that you have an Oracle Spatial license.  Otherwise you cannot use the GeoRaster object with just the Locator license.
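
    For a quick check on the target (a generic dictionary query, not from the original reply):

    select owner, object_name, object_type, status
    from   dba_objects
    where  owner = 'MDSYS' and object_name = 'SDO_GEORASTER';

    If the type shows up as anything other than VALID, that would explain why the import cannot bind the column to it.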

    Cheers,

    Paul

  • DataPump import error

    Friends

    DB: 11gR2

    OS: Linux

    Following some expert help, I am using NETWORK_LINK for the schema import.

    No standby DB available.

    Trying to get the steps below done in the correct order.

    What I'm doing (import the schema and split the partition on the destination before importing the data, indexes, constraints):

    1. import-1: import metadata_only

    2. split the partition on the destination

    3. import-2: import data_only

    4. import-3: import indexes, constraints

    #1 - #3 above completed without error; #4 fails with the errors below.

    Several errors:

    ==========

    ORA-39083: Object type INDEX failed to create with error:

    ORA-14024: number of partitions of LOCAL index must equal that of the underlying table

    ORA-39083: Object type INDEX_STATISTICS failed to create with error:

    ORA-20005: object statistics are locked (stattype = ALL)

    Import/split commands used:

    ====================

    #1 import-1: import metadata_only

    DIRECTORY = DATA_PUMP_REFRESH

    LOGFILE = IMP1.LOG

    SCHEMAS = SCOTT

    NETWORK_LINK = DBLINK.SCOTT

    CONTENT = METADATA_ONLY

    EXCLUDE = INDEX, CONSTRAINT

    #2 Split the destination partition for five tables

    Splitting part_2015 into monthly partitions for 7 months

    ALTER TABLE SCOTT.CUSTOMER

    SPLIT PARTITION PART_2015

    AT (TO_DATE('20150201','YYYYMMDD'))

    INTO (

    PARTITION PART_2015_01 TABLESPACE SCOTT_MONTHLY,

    PARTITION PART_2015 TABLESPACE SCOTT_MONTHLY)

    /

    ....

    the above split was done for each of the 7 months

    #3 import-2: import data_only

    DIRECTORY = DATA_PUMP_REFRESH

    LOGFILE = IMP2.LOG

    SCHEMAS = SCOTT

    NETWORK_LINK = DBLINK.SCOTT

    CONTENT = DATA_ONLY

    #4 import-3: import indexes, constraints

    DIRECTORY = DATA_PUMP_REFRESH

    LOGFILE = IMP3.LOG

    SCHEMAS = SCOTT

    NETWORK_LINK = DBLINK.SCOTT

    INCLUDE = INDEX, CONSTRAINT

    CONTENT = ALL

    It took almost seven hours.

    Questions:

    ========

    1. Is the above approach correct, or am I missing something here?

    2. Any suggestions?

    3. I did not update the global indexes when splitting the partition; was this necessary?

    4. Somehow I think I need to exclude the stats from import-3 and do them separately, but gathering stats for 300 GB takes really long; is it right to gather statistics separately to avoid the stats lock error?

    5. Given that this was only a test, I did not lock the source schema; will that make a difference?

    Appreciate any feedback, comments and input, experts.

    You did an export, or you couldn't have imported the tables, indexes, and data.  The dump file contains the index DDL from the source database.  You changed the table definition after you imported the metadata, which made the stored DDL for the associated indexes invalid.

    1. I expect that the global indexes should be fine; it's the local index create statements that need to be re-written.

    2. It would not be an easy task, as you would have to query user_tab_partitions to get the partition names and use them to generate the index partition names.  You would also need to do this in PL/SQL to loop through the partition names and build correct statements for all the indexes (see the sketch after this list).  Of course, you are welcome to give that a try.

    3. I would not export statistics; they can be generated when the indexes are created.  You split partitions, so the imported table stats no longer apply.

    4. As stated above, I would exclude statistics entirely from the export.

    5. I would make sure that the split partitions have current statistics.  Creating the indexes will create fresh statistics on those automatically.
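
    For illustration, a minimal sketch of the kind of loop point 2 describes (the index name, key column and tablespace are hypothetical; the DDL template must match your real index definitions):

    -- Sketch: build CREATE INDEX DDL whose LOCAL partition list matches
    -- the table partitions that exist after the splits.
    set serveroutput on
    declare
      l_ddl varchar2(4000);
    begin
      l_ddl := 'CREATE INDEX SCOTT.CUSTOMER_IX1 ON SCOTT.CUSTOMER (CUST_DATE) LOCAL (';
      for rec in (select partition_name
                  from   user_tab_partitions
                  where  table_name = 'CUSTOMER'
                  order  by partition_position)
      loop
        l_ddl := l_ddl || 'PARTITION ' || rec.partition_name ||
                 ' TABLESPACE SCOTT_MONTHLY,';
      end loop;
      l_ddl := rtrim(l_ddl, ',') || ')';
      dbms_output.put_line(l_ddl);   -- review the DDL, then: execute immediate l_ddl;
    end;
    /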

    David Fitzjarrell

  • What is the main difference between IMPORT AND EXPORT

    What is the main difference between IMPORT and EXPORT using Toad on the production site?

    EXP USERNAME/PASSWORD@DATABASENAME FILE=E:\A\ABC.DMP LOG=E:\A\ABC.LOG OWNER=USERNAME GRANTS=N STATISTICS=NONE CONSISTENT=Y

    IMP USERNAME/PASSWORD@DATABASENAME FILE=LOCATION.DMP LOG=LOCATION.LOG FROMUSER=USERNAME TOUSER=USERNAME GRANTS=N STATISTICS=NONE IGNORE=Y

  • can I exclude statistics and index both at the same time

    Hi people,

    I am doing a big table re-organization, so my question is: can I exclude statistics and indexes at the same time?

    I am planning this activity as below; please advise me if I'm wrong.

    (1) create table xx_031114 as select * from xx;   (as a precautionary measure)

    (2) export the table xx using its owner, the user yy

    expdp yy/passwd@tnsnames DIRECTORY = export DUMPFILE = xx.dmp TABLES = xx LOGFILE = xx.log

    (3) truncate the table or drop the table

    SQL > truncate table xx;

    SQL > drop table xx;

    (4) import the table metadata without statistics & indexes as below

    impdp yy/passwd@tnsnames DIRECTORY = export DUMPFILE = xx.dmp TABLES = xx EXCLUDE = STATISTICS,INDEX CONTENT = METADATA_ONLY LOGFILE = imp_031114.log (import only metadata without statistics & indexes)


    Question 1: can I exclude the statistics and indexes at once?

    impdp yy/passwd@tnsnames DIRECTORY = export DUMPFILE = xx.dmp TABLES = xx CONTENT = DATA_ONLY EXCLUDE = INDEX LOGFILE = imp_dataonly_031114.log (import data only without indexes)

    impdp yy/passwd@tnsnames DIRECTORY = export DUMPFILE = xx.dmp TABLES = xx INCLUDE = INDEX (import only the indexes)

    Question 2: can I import the indexes only, as above?

    Thank you and best regards.

    Younus

    For your question 1: you can use separate EXCLUDE options to exclude indexes and statistics.
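
    For example, in a parameter file it would look like this (a sketch reusing the names from the question; the two separate EXCLUDE lines are the point):

    DIRECTORY=export
    DUMPFILE=xx.dmp
    TABLES=xx
    CONTENT=METADATA_ONLY
    EXCLUDE=STATISTICS
    EXCLUDE=INDEX
    LOGFILE=imp_031114.log

    Each object type gets its own EXCLUDE occurrence rather than a comma-separated list.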

    Question 2: impdp user/password directory = your_dir dumpfile = your_dump include = INDEX

    Hope this helps

    Regards

    Pravin

  • Import of DataPump

    Hi all

    We are on Windows Server 2008R2 with a 2-node RAC, v11.2.0.3. My client wants the data of the staging database in sync with the production database.  I chose to use Data Pump import. The process I follow is as below:

    (1) export the production XDB with FULL=Y

    (2) disable all constraints in the staging XDB

    (3) save all grants and privileges of the staging XDB

    (4) run the import command below

    impdp 'sys/XXXX@XDB as sysdba' CONTENT = ALL directory = XDB_DATAPUMP dumpfile = exp-XDB.dmp logfile = imp-XDB.log parallel = 4 cluster = N ESTIMATE = STATISTICS TABLE_EXISTS_ACTION = REPLACE

    My question is

    (1) do I have to analyze all the indexes after the import, even if I use ESTIMATE = STATISTICS?

    (2) are there any major steps I need to do before or after the import?

    Please give me some information. Help, please.

    I would expect

    RMAN > DUPLICATE DATABASE

    to be faster
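
    A minimal sketch of that alternative (all connection and database names are placeholders; it assumes an auxiliary instance prepared for duplication):

    rman target sys/XXXX@XDB_PROD auxiliary sys/XXXX@XDB_STG

    RMAN> DUPLICATE TARGET DATABASE TO XDBSTG FROM ACTIVE DATABASE NOFILENAMECHECK;

    Unlike the Data Pump refresh, this recreates the staging database as a physical copy, so the constraint disabling and post-import analysis steps disappear.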

  • Tablespace level export, schema level import - is it possible?

    Hello

    If I have a tablespace-level Data Pump export (performed with TABLESPACES = <list of tablespaces>), is it possible to import only the tables and dependent objects belonging to a specific schema residing in the exported tablespaces? DB version is 11.2.0.3.0. According to the documentation it should be possible: http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_import.htm#i1011943 : "A schema import is specified using the SCHEMAS parameter. The source can be a full, table, tablespace, or schema-mode export dump file set or another database."

    A quick test, however, seems to show that it is not so:

    (1) source DB - I have two tablespaces (TS1, TS2) and two schemas (USER1, USER2):

    SQL> select owner, segment_name, segment_type, tablespace_name
      2  from dba_segments
      3  where owner in ('USER1', 'USER2');

    OWNER  SEGMENT_NAME    SEGMENT_TYPE       TABLESPACE_NAME

    ------ --------------- ------------------ ----------------

    USER1 UQ_1 INDEX TS1

    USER1 T2 TABLE TS1

    USER1 T1 TABLE TS1

    USER2 T4 TABLE TS2

    USER2 T3 TABLE TS2

    (2) I perform a tablespace-level export:

    $ expdp system directory=dp_dir tablespaces=ts1,ts2 dumpfile=test.dmp

    Export: Release 11.2.0.3.0 - Production on Fri Jul 11 14:02:54 2014

    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

    Password:

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production

    With partitioning, OLAP, Data Mining and Real Application Testing options

    Start "SYSTEM". "" SYS_EXPORT_TABLESPACE_01 ": System / * Directory = dp_dir tablespaces = ts1, ts2 dumpfile = test.dmp

    Current estimation using BLOCKS method...

    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

    Total estimation using BLOCKS method: 256 KB

    Processing object type TABLE_EXPORT/TABLE/TABLE

    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX

    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

    . . exported "USER1". "" T1 "5,007 KB 1 lines

    . . exported "USER1". "" T2 "5,007 KB 1 lines

    . . exported "user2". "" T3 "5,007 KB 1 lines

    . . exported "user2". "" T4 "5,007 KB 1 lines

    Master table "SYSTEM"."SYS_EXPORT_TABLESPACE_01" successfully loaded/unloaded

    ******************************************************************************

    "(3) I'm trying to import only the objects belonging to User1 and I get the 'ORA-39039: schema '(' USER1')' expression contains no valid schema" error: "

    $ impdp system directory=dp_dir schemas=USER1 dumpfile=test.dmp

    Import: Release 11.2.0.3.0 - Production on Fri Jul 11 14:05:15 2014

    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

    Password:

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production

    With partitioning, OLAP, Data Mining and Real Application Testing options

    ORA-31655: no data or metadata objects selected for job

    ORA-39039: Schema expression "('USER1')" contains no valid schemas.

    (4) However, the dump file clearly contains the owner of the tables:

    $ impdp system directory=dp_dir dumpfile=test.dmp sqlfile=imp_dump.txt

    Excerpt from imp_dump.txt:

    -- new object type path: TABLE_EXPORT/TABLE/TABLE

    CREATE TABLE "USER1"."T1"
       (    "DUMMY" VARCHAR2(1 BYTE)
       )

    So is it possible to somehow filter the objects belonging to a certain schema?

    Thanks in advance for any suggestions.

    Swear

    Hi Swear,

    This led to a small investigation on my part that I thought was worthy of a blog post in the end...

    Oracle DBA Blog 2.0: A datapump bug or a feature and an obscure workaround

    Initially I thought that you had made a mistake, but that does seem to be the way it behaves. I've included a few possible workarounds - see if one of them does what you want to do...
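
    One workaround of that general kind (a sketch, not necessarily the one from the blog post) is to fall back to a table-mode import and name the schema's tables explicitly, since a table-mode import is allowed from a tablespace-mode dump file set:

    $ impdp system directory=dp_dir dumpfile=test.dmp tables=USER1.T1,USER1.T2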

    Cheers,

    Rich

  • Importing a partial dump over a full dump

    Dear all,

    Using:

    SQL> select * from v$version;
    
    
    BANNER
    ----------------------------------------------------------------
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE    10.2.0.1.0      Production
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    
    
    
    

    I have a requirement where, because of the huge size, it takes more than 8 hours to take a dump using expdp.

    So I intend to take a full dump on the weekend and a partial dump in between. A few tables are partitioned in 3 parts, each over 2 GB, as part1, part2, part3; part3 always has the latest data (maxvalue).

    I back up all the non-partitioned tables together with the part3 partition data of the partitioned tables.

    The above scenario works very well.

    The problem is in the following scenario:

    Now I must take the full dump on the weekend and import the partial dump on weekdays on DR.

    Full dump works great.

    When importing the partial dump, I need to import the latest data of all tables (non-partitioned ones) without disturbing the existing data (the part1, part2 partition data).

    Partial import with the append option:

    Starting "SCOTT"."SYS_IMPORT_FULL_01":  scott/******** REMAP_SCHEMA=scott:scott1 directory=DUMPDIR dumpfile=SCOTT_PART_09-07-14.DMP logfile=imp_scott_09-07-14.log TRANSFORM=SEGMENT_ATTRIBUTES:n TABLE_EXISTS_ACTION=append EXCLUDE=CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/TABLE
    ORA-39152: Table "SCOTT1"."TEMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
    ORA-39152: Table "SCOTT1"."DEPT" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
    ORA-39152: Table "SCOTT1"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
    ORA-39152: Table "SCOTT1"."BONUS" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
    ORA-39152: Table "SCOTT1"."SALGRADE" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    ORA-31693: Table data object "SCOTT1"."TEMP":"TEMP_PART2" failed to load/unload and is being skipped due to error:
    ORA-00001: unique constraint (SCOTT1.TEMP_BAK_PK) violated
    ORA-31693: Table data object "SCOTT1"."DEPT" failed to load/unload and is being skipped due to error:
    ORA-00001: unique constraint (SCOTT1.PK_DEPT) violated
    ORA-31693: Table data object "SCOTT1"."EMP" failed to load/unload and is being skipped due to error:
    ORA-00001: unique constraint (SCOTT1.PK_EMP) violated
    . . imported "SCOTT1"."SALGRADE"                         5.585 KB       5 rows
    . . imported "SCOTT1"."BONUS"                                0 KB       0 rows
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Job "SCOTT"."SYS_IMPORT_FULL_01" completed with 8 error(s) at 15:12:59
    

    The APPEND option doesn't work, i.e. it is not adding new records to the table; rows are skipped due to the errors above, even though I used the EXCLUDE=CONSTRAINT option as well.

    I tried the REPLACE option, but that removes the existing partition data (part1, part2); my requirement is that part1, part2 should not be removed from the table.

    Kindly guide me to resolve the above error.

    Thanks and regards

    Saami.

    Hello

    The line from the log file explains why:

    "Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append"

    So even though the stats stuff is mentioned in the log file, it will actually be skipped - which makes sense: if you load a single row into a 1-billion-row table, you don't want the stats to then say '1 row'.  The statistics in the dump file are just for what is in the dump file - they are not relevant for the existing table plus the dump file (in the general case anyway - here you "know" the stats are relevant since you truncated the table first).

    You could make yet another import after it with just INCLUDE=STATISTICS to load just those statistics?
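
    A sketch of that follow-up run, reusing the parameters from the log above (only the log file name and the INCLUDE clause differ; treat it as illustrative rather than tested):

    impdp scott/******** REMAP_SCHEMA=scott:scott1 directory=DUMPDIR dumpfile=SCOTT_PART_09-07-14.DMP logfile=imp_stats.log INCLUDE=STATISTICS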

    Cheers,

    Rich

  • Data Pump Export/Import

    Hello Forum,

    I have a question regarding Data Pump imports and exports, perhaps something I should already know.

    I need to empty a table that has about 200 million rows; I need to get rid of about three quarters of the data.

    My intention is to use Data Pump to export the table and its indexes and constraints etc.

    The table has no relationship to any other table; it is composed of approximately 8 columns with NOT NULL constraints.

    My plan is

    1. truncate the table

    2. disable or remove the indexes

    3. leave the constraints in place?

    4. use Data Pump to import the rows to keep.

    My question:

    will my indexes and constraints be imported too when I want to import only a subset of my exported table?

    or

    If I drop the table after truncating it, will I be able to import my table and indexes, even if I use the QUERY functionality as part of my import statement?

    When using the QUERY functionality of Data Pump, must my table exist in the database before doing the import,

    or will Data Pump import handle it as usual, i.e. create the table, indexes, grants and statistics etc.?

    Thank you for your comments.

    Regards

    Your approach is inefficient.

    What you need to do is:

    create table foo as select * from bar where <rows you want to keep>;

    truncate table bar;

    insert /*+ APPEND */ into bar select * from foo;

    Rebuild the indexes on the table.

    Done.

    This whole thing with expdp and impdp is just a waste of resources. My approach generates minimal redo.
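
    Spelled out end to end (a sketch; bar, foo, the predicate, and the index name are placeholders):

    -- keep the quarter of the rows you want in a scratch table
    create table foo as select * from bar where keep_flag = 'Y';

    truncate table bar;

    -- direct-path insert back into the emptied table
    insert /*+ APPEND */ into bar select * from foo;
    commit;

    -- rebuild indexes and refresh optimizer statistics
    alter index bar_ix1 rebuild;
    exec dbms_stats.gather_table_stats(user, 'BAR')

    drop table foo purge;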

    ----------

    Sybrand Bakker

    Senior Oracle DBA
