Problem with expdp/impdp

Hi, I'm running expdp/impdp, but I get the same errors every time. OS: Windows Server 2003, Oracle 10g.

errors:

C:\Documents and Settings\Administrateur> impdp "/ as sysdba" directory=data_pump_dir dumpfile=scott.dmp logfile=scott.log

Import: Release 10.2.0.3.0 - Production on Tuesday, March 25, 2014 10:43:18

Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production

With partitioning, OLAP and Data Mining options

ORA-31626: job does not exist

ORA-04063: package body "SYS.DBMS_INTERNAL_LOGSTDBY" contains errors

ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_INTERNAL_LOGSTDBY"

ORA-06512: at "SYS.KUPV$FT", line 834

ORA-04063: package body "SYS.DBMS_INTERNAL_LOGSTDBY" contains errors

ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_INTERNAL_LOGSTDBY"

SQL> alter package dbms_internal_logstdby compile body;

Warning: Package Body altered with compilation errors.

SQL> show errors

Errors for PACKAGE BODY DBMS_INTERNAL_LOGSTDBY:

LINE/COL ERROR

-------- -----------------------------------------------------------------

2563/4 PL/SQL: statement ignored

2563/20 PL/SQL: ORA-00942: table or view does not exist

2625/3 PL/SQL: statement ignored

2625/19 PL/SQL: ORA-00942: table or view does not exist

2809/2 PL/SQL: statement ignored

2809/13 PL/SQL: ORA-00942: table or view does not exist

HELP ME PLEASE

Since you're using Oracle 10g...

Check whether you have columns of "TIMESTAMP(6)" data type named TIMESTAMP, using a query like this:

SELECT owner, table_name, column_name, data_type FROM dba_tab_columns WHERE column_name = 'TIMESTAMP' AND data_type LIKE '%TIMESTAMP%';

If that query returns a list of tables, do the following for each table:

ALTER TABLE system.table_name MODIFY (timestamp DATE);

Then recompile the package "DBMS_INTERNAL_LOGSTDBY" again.
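
If that query returns nothing, a quick way to see which referenced objects the package body can no longer resolve (the usual cause of the ORA-00942) is to check its dependencies. A diagnostic sketch, not part of the original fix:

SELECT referenced_owner, referenced_name, referenced_type
  FROM dba_dependencies
 WHERE owner = 'SYS'
   AND name = 'DBMS_INTERNAL_LOGSTDBY'
   AND type = 'PACKAGE BODY';

Any referenced table that no longer exists is what the package body is failing to compile against.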

I hope this helps.

Kind regards

Tags: Database

Similar Questions

  • expdp, impdp Grid 11.2 and ACFS

    Version of the grid: 11.2.0.3
    OS: Red Hat Enterprise Linux 5.4

    A few months ago, on our RAC cluster, while taking an expdp backup to a local Linux file system, I got some errors. I can't remember the error code or the script now, as I had too much work that day. The problem was solved when we used an ACFS file system location as the directory for the expdp directory object.

    Today, on the same RAC cluster, to reproduce that issue, I tested an expdp backup to a local Linux
    file system (/home/oracle/pumpDir) and the expdp finished without any problems.

    Has anyone run into problems with expdp/impdp in a RAC cluster environment because of using a local Linux file system?

    The error would most probably be "file not found".

    expdp/impdp (Data Pump) may start several parallel processes across the cluster. If any node cannot find the "local" directory, you will get the above error.

    Bottom line: the parallel expdp processes may or may not all be on a single node, so the "directory" MUST be accessible across the cluster.
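
    A minimal sketch of the ACFS workaround mentioned above (the mount point /acfs_mount/dpdump and the directory name are assumptions, not from the original post):

    -- run once, from any instance: point the directory object at a cluster-wide ACFS path
    CREATE OR REPLACE DIRECTORY dp_acfs_dir AS '/acfs_mount/dpdump';
    GRANT READ, WRITE ON DIRECTORY dp_acfs_dir TO system;

    expdp system/password full=y directory=dp_acfs_dir dumpfile=full%U.dmp logfile=full.log parallel=4

    With parallel=4 and the %U substitution every worker gets its own dump file, and because the path is on ACFS all nodes can reach it.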

  • expdp/impdp method for the database upgrade from 11.2.0.4 to 12.1.0.2 - any error in "row order" logic?

    Hi DBA:

    We are evaluating different methods of upgrading an 11gR2 DB to 12c (12.1.0.2) this year, and it is very important for us to use the right method.  It was reported that when the DB was 9i and was upgraded to 11g, the "row" order of some tables was wrong when executing SELECT statements in the application (a Java application) - at the moment I don't know whether it was "select * from ..." or a specific query with a "where" clause, nor what type of data was wrong (maybe sequence or CLOB data, etc.).

    As far as I know, "exp/imp" and "expdp/impdp" are logical DB backup methods, so Oracle automatically imports the data and places it in each table.  If there are no errors in the impdp logs after it finishes, there should not be any mistakes, right?

    If we use the method "create a new database + expdp/impdp the user schemas" to port data from 11g to 12c, two questions:

    1. Will this method lead to erroneous sequence data or wrong ordering?  If so, how do we regenerate the sequence numbers?

    2. Can this method bring over CLOB and BLOB data without any errors?  If there are errors, how do we detect and repair them?

    I used the same method to port data from 10.2.0.5 to 11g with no problem.

    I'll keep this thread posted after getting more information from the application team.

    Thank you very much!

    Dania

    INSERT ... SELECT, CREATE TABLE AS SELECT and Export-Import will all "re-organize" the table by assigning new blocks.

    An in-place upgrade (using DBUA or the command line) does not re-create user tables (although it DOES rebuild the data dictionary tables).  Therefore, it does not change the physical location of rows in user tables.  However, there is no guarantee that subsequent INSERT, DELETE or UPDATE statements will not change the physical location of rows.  Nor is there any guarantee that a SELECT will always return rows in the stored order unless an ORDER BY is specified explicitly.

    If you export the data while it is still being used, you cannot avoid sequences being updated in the source database.  This usually results in mismatching sequence numbers between the source database and the new (imported) database. In fact, even the data may differ unless you use CONSISTENT or FLASHBACK_SCN.  (These two options in exp/expdp do NOT prevent sequence updates.)   The right way to manage sequences is to recreate or increment them in the imported database to match the values in the source, once the import is complete.
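
    A minimal sketch of that post-import sequence fix (the table, column and sequence names are hypothetical):

    -- find the highest key value actually imported
    SELECT MAX(order_id) FROM app_owner.orders;

    -- push the sequence past that value, then restore the normal increment
    ALTER SEQUENCE app_owner.orders_seq INCREMENT BY 1000;
    SELECT app_owner.orders_seq.NEXTVAL FROM dual;
    ALTER SEQUENCE app_owner.orders_seq INCREMENT BY 1;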

    Hemant K Collette

  • EXPDP/IMPDP run simultaneously

    Are there any foreseeable problems with running expdp and impdp simultaneously? I guess 10g can handle it, but maybe there could be some sort of locking issue. Purely theoretical situation.

    Captain Obvious: "Be nice, he is a beginner."

    I am impressed by the speed at which you answered. This is my first post in eight years. I think you've earned your points on this one.

    I think we are all here to help each other and learn. It's a nice place to both share and learn.

    Anand

  • Accelerate exp/imp or expdp/impdp

    Hello

    Is it possible to speed up an exp/imp or expdp/impdp job that is already running? Is it possible to speed up a running RMAN backup or RMAN restore process?

    Kind regards

    007

    To speed up a running Data Pump export or import, you can attach to the job and increase the level of parallelism: impdp attach=<job_name>
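
    For example (the job name SYS_IMPORT_FULL_01 is an assumption; query DBA_DATAPUMP_JOBS for the real one):

    impdp system/password attach=SYS_IMPORT_FULL_01
    Import> PARALLEL=4
    Import> CONTINUE_CLIENT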

    I don't know any way to speed up a running RMAN backup.

    To speed up an RMAN restore, you can kill the restore and re-run it using several channels.  The restore should pick up where it left off and may run faster with multiple channels.  This is relevant only if you have several backup pieces.
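
    A minimal sketch of the multi-channel re-run (the channel count is an assumption; size it to your I/O capacity):

    RUN {
      ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
      ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
      ALLOCATE CHANNEL c3 DEVICE TYPE DISK;
      RESTORE DATABASE;
      RECOVER DATABASE;
    }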

  • expdp/impdp: constraints in a parent-child relationship

    Hello

    I have a table parent1, and tables child1, child2 and child3 have foreign keys created against parent1.

    Now I want to do a delete on parent1, but as the number of records on parent1 is very high, we are going with expdp/impdp using the QUERY option.

    I took a query-level expdp of parent1. Then I dropped parent1 with the CASCADE CONSTRAINTS option, and all the foreign keys on child1, 2 and 3 that reference parent1 were automatically dropped.

    Now, if I run impdp from that query-level dump file, will those foreign key constraints be created automatically on child1, 2 and 3, or do I need to recreate them manually?

    Kind regards

    ANU

    Hello
    The FKs will not be in the dumpfile - see the code example below, where I generate a sqlfile following pretty much the process you would have used. This is because the FK belongs to the DDL of the child table, not the parent.

    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    
    OPS$ORACLE@EMZA3>create table a (col1 number);
    
    Table created.
    
    OPS$ORACLE@EMZA3>alter table a add primary key (col1);
    
    Table altered.
    
    OPS$ORACLE@EMZA3>create table b (col1 number);
    
    Table created.
    
    OPS$ORACLE@EMZA3>alter table b add constraint x foreign key (col1) references a(col1);
    
    Table altered.
    
    OPS$ORACLE@EMZA3>
    
    EMZA3:[/oracle/11.2.0.1.2.DB/bin]# expdp / include=TABLE:\"=\'A\'\"
    
    Export: Release 11.2.0.3.0 - Production on Fri May 17 15:45:50 2013
    
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    Starting "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04":  /******** include=TABLE:"='A'"
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    . . exported "OPS$ORACLE"."A"                                0 KB       0 rows
    Master table "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for OPS$ORACLE.SYS_EXPORT_SCHEMA_04 is:
      /oracle/11.2.0.3.0.DB/rdbms/log/expdat.dmp
    Job "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully completed at 15:45:58
    
    Import: Release 11.2.0.3.0 - Production on Fri May 17 15:46:16 2013
    
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    Master table "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
    Starting "OPS$ORACLE"."SYS_SQL_FILE_FULL_01":  /******** sqlfile=a.sql
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Job "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully completed at 15:46:17
    
    -- CONNECT OPS$ORACLE
    ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
    -- new object type path: SCHEMA_EXPORT/TABLE/TABLE
    CREATE TABLE "OPS$ORACLE"."A"
       (    "COL1" NUMBER
       ) SEGMENT CREATION IMMEDIATE
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
     NOCOMPRESS LOGGING
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "SYSTEM" ;
    -- new object type path: SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ALTER TABLE "OPS$ORACLE"."A" ADD PRIMARY KEY ("COL1")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "SYSTEM"  ENABLE;
    -- new object type path: SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    DECLARE I_N VARCHAR2(60);
      I_O VARCHAR2(60);
      NV VARCHAR2(1);
      c DBMS_METADATA.T_VAR_COLL;
      df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS';
     stmt varchar2(300) := ' INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,d1,cl1) VALUES (''I'',6,:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,NULL,:14,:15,NULL,:16,:17)';
    BEGIN
      DELETE FROM "SYS"."IMPDP_STATS";
      c(1) := 'COL1';
    
      DBMS_METADATA.GET_STAT_INDNAME('OPS$ORACLE','A',c,1,i_o,i_n);
      EXECUTE IMMEDIATE stmt USING 0,I_N,NV,NV,I_O,0,0,0,0,0,0,0,0,NV,NV,TO_DATE('2013-05-17 15:43:24',df),NV;
    
      DBMS_STATS.IMPORT_INDEX_STATS('"' || i_o || '"','"' || i_n || '"',NULL,'"IMPDP_STATS"',NULL,'"SYS"');
      DELETE FROM "SYS"."IMPDP_STATS";
    END;
    /
    

    Kind regards
    Harry

    http://dbaharrison.blogspot.com/

  • Can expdp/impdp be used to upgrade the R12 database (instead of DBUA)?

    Hi all

    (1) Is it possible to convert the database character set to UTF8 as part of the database
    upgrade process within the Rel12 upgrade? Any suggestion on how to do it? Any doc/procedure?

    (2) According to the Rel 12 applications upgrade guide, I can follow the "Interoperability Notes: Oracle Applications 11i
    with Oracle Database 11gR2" doc to upgrade my database. But this doc suggests using DBUA.
    Can I use expdp/impdp instead, so I can convert the character set at the same time? Is there any documentation
    I can follow to use expdp/impdp for the upgrade?

    (3) Are there any extra/specific steps for Rel12 if I want to convert the database character set
    to UTF8?

    Thanks in advance!

    (1) Is it possible to convert the database character set to UTF8 as part of the database
    upgrade process within the Rel12 upgrade? Any suggestion on how to do it? Any doc/procedure?

    I don't think so. You must upgrade and then convert the character set, or convert and then upgrade - you can log an SR to confirm this with Oracle Support.

    (2) According to the Rel 12 applications upgrade guide, I can follow the "Interoperability Notes: Oracle Applications 11i
    with Oracle Database 11gR2" doc to upgrade my database. But this doc suggests using DBUA.
    Can I use expdp/impdp instead, so I can convert the character set at the same time? Is there any documentation
    I can follow to use expdp/impdp for the upgrade?

    DBUA is the only supported method at the Oracle EBS instance level. Please see this similar thread - Oracle EBS 11i Database Upgrade to 11gR2 - Is the Manual Upgrade method supported?

    (3) Are there any extra/specific steps for Rel12 if I want to convert the database character set
    to UTF8?

    Please see the documentation referenced in this thread - ADADMIN convert character set CUSTOM_TOP

    Thank you
    Hussein

  • exp/imp vs expdp/impdp

    Hi all;
    Which gives better performance: the traditional exp/imp or expdp/impdp?

    Hello

    Oracle Data Pump is a newer, faster and more flexible alternative to the "exp" and "imp" utilities used in previous Oracle versions. In addition to basic export and import functionality, Data Pump provides a PL/SQL API and support for external tables.

    So Oracle recommends using Data Pump for all 10g and later databases, and the old exp/imp for pre-10g databases :)
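
    For reference, a minimal sketch of the PL/SQL API mentioned above (the schema and file names are placeholders):

    DECLARE
      h     NUMBER;
      state VARCHAR2(30);
    BEGIN
      -- open a schema-mode export job through DBMS_DATAPUMP
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
      DBMS_DATAPUMP.ADD_FILE(h, 'scott.dmp', 'DATA_PUMP_DIR');
      DBMS_DATAPUMP.ADD_FILE(h, 'scott.log', 'DATA_PUMP_DIR',
                             filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
      DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);  -- blocks until the job completes
      DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || state);
    END;
    /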

  • expdp, impdp in 12c and 10g

    Hi all

    We have Oracle 10g Release 2 in a Windows environment. We want to take a backup of this database using expdp and import it into Oracle 12c using impdp.

    How to do this?

    I use the following approach

    on 10g database (production):

    expdp system/password@orcl full=y directory=TEST_DIR dumpfile=fulldb_12_jan_2016.dmp logfile=fulldb_12_jan_2016.log

    on the 12c database (test):

    impdp system/password@orcl full=y directory=TEST_DIR dumpfile=fulldb_12_jan_2016.dmp logfile=fulldb_12_jan_2016.log

    The problem is that we need to create tablespaces and data files. Do I have to create them, as in production, before running the impdp command?

    The second problem is that the user schemas have quotas on the tablespaces. So do I have to create the users as well before the impdp? I don't know some of the user schema passwords.

    Please guide me

    Thank you.

    Hello

    Since you are doing a full expdp, users and tablespaces will be created for you.

    So there is no need to pre-create them.

    Concerning tablespaces, impdp will try to create them with the same data files (i.e. the same file locations).

    So, if you do not have the same directory structure on the destination, it will fail.

    If the file structure is different, you can pre-create the tablespaces or remap the data files using REMAP_DATAFILE.
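
    For example, using a parameter file to avoid shell quoting issues (the source and destination paths are assumptions for illustration):

    # imp_fulldb.par
    full=y
    directory=TEST_DIR
    dumpfile=fulldb_12_jan_2016.dmp
    logfile=fulldb_12_jan_2016.log
    remap_datafile="'D:\ORADATA\ORCL\USERS01.DBF':'/u01/oradata/ORCL/users01.dbf'"

    impdp system/password@orcl parfile=imp_fulldb.par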

    Users will be created with the same passwords on the destination, so no worries there.

  • EXPDP/IMPDP

    Hi all

    How do I create a sqlfile using impdp and then run it, so that the procedures get replaced by the more recent ones from the source (expdp) database?

    But the sqlfile has a bug: it includes blank space in the SQL statements, which causes an error.

    For example:

    ALTER PROCEDURE "BATCH". "" SP_COUNT_DLS_DRCR ".

    COMPILE

    PLSQL_OPTIMIZE_LEVEL = 2

    PLSQL_CODE_TYPE = INTERPRETER

    PLSQL_DEBUG = FALSE PLSCOPE_SETTINGS = ' IDENTIFIERS: NO

    "REUSE THE TIMESTAMP SETTINGS ' 2014-01-10 10:12:26.

    /

    There is blank space before REUSE, and the script errors on the REUSE statement.

    Any tips or tricks on this?

    Thank you very much

    Hello

    Yep - sorry, I thought you just had empty lines and that was the problem :-)

    The sqlfile option just writes out the text that import would try to run - so it stays as CREATE instead of CREATE OR REPLACE.

    What you have to do is edit the file and do a global replace of "CREATE" with "CREATE OR REPLACE"...
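
    A minimal sketch of that workflow (the file names are placeholders; review the edited file before running it, and note that on 12c the statements read CREATE EDITIONABLE PROCEDURE, so adjust the pattern):

    # regenerate the DDL-only file, switch CREATE to CREATE OR REPLACE, then run it
    impdp system/password directory=DATA_PUMP_DIR dumpfile=procs.dmp sqlfile=procs.sql include=PROCEDURE
    sed -i 's/^CREATE PROCEDURE/CREATE OR REPLACE PROCEDURE/' procs.sql
    sqlplus batch/password @procs.sql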

    I still don't know why Oracle didn't add this as an option - it's one of the main things missing from impdp - perhaps there is a technical reason why they don't want to implement it?

    See you soon,

    Rich

  • Is table mode different in expdp/impdp?

    11.2 / Linux

    With original exp/imp, the OWNER, TABLES and FULL modes are mutually exclusive, i.e. in original exp, if you want to back up the EMP table in the SCOTT schema, the following will error:
    exp / tables= EMP owner=SCOTT file=selected_tables.dmp log=selected_tables.log
    
    This will end up with:
    EXP-00026: conflicting modes specified
    EXP-00000: Export terminated unsuccessfully
    You must prefix the owner name (tables=<owner>.<table_name>) as below for the above requirement:
    exp / tables= SCOTT.EMP file=selected_tables.dmp log=selected_tables.log
    But in impdp things work differently

    I was provided an expdp dump of REG_TRACK_EVT from a CLS_USR_PROD schema. I wanted to import that table into the development schema CLS_USR_DEV. So I did, but it kept throwing error ORA-39166 as below:
    $ impdp userid=\'/ as sysdba\' DIRECTORY=DPUMP_DIR DUMPFILE= reg_track_evt.dmp LOGFILE=reg_track_evt-imp.log tables=CLS_USR_DEV.REG_TRACK_EVT
    
    Import: Release 11.2.0.1.0 - Production on Mon Jul 30 18:24:32 2012
    
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39002: invalid operation
    ORA-39166: Object CLS_USR_DEV.REG_TRACK_EVT was not found.
    To solve this problem, I had to remove the TABLES parameter (!) and use the REMAP_SCHEMA parameter:
    impdp userid=\'/ as sysdba\' DIRECTORY=DPUMP_DIR DUMPFILE= reg_track_evt.dmp LOGFILE=reg_track_evt-imp.log REMAP_SCHEMA=CLS_USR_PROD:CLS_USR_DEV
    If this is the case, then what's the point of having the TABLES parameter in impdp?

    It's really the same thing regarding conflicting modes. From what you used in your import command:

    impdp userid=\'/ as sysdba\' DIRECTORY=DPUMP_DIR DUMPFILE=reg_track_evt.dmp LOGFILE=reg_track_evt-imp.log REMAP_SCHEMA=CLS_USR_PROD:CLS_USR_DEV

    If you are remapping CLS_USR_PROD:CLS_USR_DEV, then the table in the dumpfile is called CLS_USR_PROD.REG_TRACK_EVT, but you tried to import CLS_USR_DEV.REG_TRACK_EVT, and that table does not exist in the dumpfile.

    What you specify in the TABLES= parameter is the table as it is named in the dumpfile. What you specify in the REMAP_SCHEMA parameter is where you want the table created. So your first command was close; you need this:

    impdp userid=\'/ as sysdba\' DIRECTORY=DPUMP_DIR DUMPFILE=reg_track_evt.dmp LOGFILE=reg_track_evt-imp.log TABLES=CLS_USR_PROD.REG_TRACK_EVT REMAP_SCHEMA=CLS_USR_PROD:CLS_USR_DEV

    I hope this helps.

    Dean

  • Error: ORA-02429 after using expdp/impdp

    Hi all
    I have a problem: when I back up my 11g database and restore it under another name, I cannot drop a specific index.
    I get this error: ORA-02429: cannot drop index used for enforcement of unique/primary key.

    Note: I can easily drop this index before I run the backup/restore procedure.
    Here are the steps.

    (1) Create database schema D1
    (2) Back it up:
    expdp schemas=D1 dumpfile=D1.dmp logfile=D1.log
    (3) Create a new DB user:
    CREATE USER D2 IDENTIFIED BY D2 ...
    (4) Restore D1 into D2:
    impdp D1/D1 remap_schema=D1:D2 dumpfile=D1.dmp
    (5) At this point I get another, identical schema D2, but for some reason I cannot drop some indexes, as described above.

    Any idea what I am missing in the process? Is impdp/expdp modifying my database somehow?

    Thank you very much.

    Ok
    You can do the following

    1. Disable all constraints.
    2. Create a script to drop and re-create the indexes.
    3. Drop the indexes.
    4. Disable the triggers.
    5. impdp with CONTENT=DATA_ONLY (see the sketch after the link below).
    6. Run the script to create the indexes.
    7. Enable the constraints.
    8. Enable the triggers.

    See this post
    http://download.Oracle.com/docs/CD/B19306_01/server.102/b14215/dp_import.htm#sthref273
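
    A minimal sketch of steps 1-5 for one child table (the owner, constraint, index and trigger names are hypothetical):

    -- 1. disable the FK constraint on the child table
    ALTER TABLE d2.child1 DISABLE CONSTRAINT child1_fk;
    -- 2./3. capture the index DDL, then drop the index
    SELECT DBMS_METADATA.GET_DDL('INDEX', 'CHILD1_IDX', 'D2') FROM dual;
    DROP INDEX d2.child1_idx;
    -- 4. disable the triggers on the table
    ALTER TABLE d2.child1 DISABLE ALL TRIGGERS;

    Then load the rows only (step 5), and afterwards re-run the saved DDL and re-enable everything:

    impdp d2/d2 dumpfile=D1.dmp remap_schema=D1:D2 tables=D1.CHILD1 content=DATA_ONLY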

  • How to replicate all records from one table to another without using expdp/impdp

    Hi, I have two databases. In one database I have a table called t1, and in the other database I have a second table t2. My first table has records that I need to transfer to the second table t2, without using expdp and impdp, every 5 minutes. What should I do?

    ??

    The best solution for this scenario is to use Oracle GoldenGate.

    However, it requires a license, and you must pay for it.

    If that is not possible, you can create a scheduler job that uses a DB link to replicate the records to the target database, but it will truncate the target table and then INSERT ... SELECT the entire table's data every time the job runs (because you cannot track only the records that have been changed or modified).

    In addition, read about replicating data using materialized views.
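
    A minimal sketch of the materialized view approach (the database link name src_link is an assumption; the 5-minute interval comes from the question):

    -- on the target database: pull t1 over a database link and refresh every 5 minutes
    CREATE MATERIALIZED VIEW t2
      REFRESH COMPLETE
      START WITH SYSDATE NEXT SYSDATE + 5/1440
      AS SELECT * FROM t1@src_link;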

  • Database 11g expdp, impdp into a 10g database

    Hello

    I would like to export from an Oracle 11g server and import into an Oracle 10g database. Which method is correct:

    1 - expdp (10g binary)

    2 - expdp (11g binary) using the VERSION=10.2 setting

    Thank you

    The second method: you must specify the parameter VERSION=10.2 when using the expdp and impdp commands.

    Please refer to this nice blog post: http://dbaoracledba.blogspot.com/2012/07/version-parameter-in-oracle-expdp-and.html
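
    For example (the directory, schema and file names are placeholders):

    # on the 11g source: write a dump file readable by 10.2
    expdp system/password directory=DATA_PUMP_DIR dumpfile=exp_10g.dmp logfile=exp_10g.log schemas=SCOTT version=10.2

    # on the 10g target
    impdp system/password directory=DATA_PUMP_DIR dumpfile=exp_10g.dmp logfile=imp_10g.log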

    Kind regards

  • expdp, impdp - redo generation?

    Friends...

    Oracle 11gR2 SE

    OS: Linux

    I had tried to search in the documentation, but could not find an answer.

    I'm dropping some tables, but prior to that I'm exporting them for backup. The DB is in archivelog mode.

    Before dropping the tables, I'm trying to find out whether the expdp job generates a lot of redo or not, given that the file system has 20 GB of free space.

    Questions:

    1. If I expdp some tables with a total size of 80 GB (table size only; indexes not included), will expdp generate a similar amount of redo, or will it generate a lot of redo?

    2. Will the impdp job also generate a lot of redo, or only during index creation in impdp?

    Thank you

    Mike

    Export does NOT generate redo. It is just dumping data into binary files. Very little redo is generated, just to maintain the export master table, but it is very, very minimal - if I had to guess, not more than 1 MB.

    Import does generate redo. Import performs "INSERT" statements behind the scenes, and that generates a lot of redo logs. You can use SQLFILE to create the index definitions and run them manually with NOLOGGING, which will reduce some redo. Something like:

    impdp username/password exclude=index - this will import everything except the indexes.

    impdp username/password include=index sqlfile=index.sql - this will create a file named index.sql containing all the CREATE INDEX statements.
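
    A small follow-on sketch of the NOLOGGING step (the index name is hypothetical; add the keyword to each CREATE INDEX statement in index.sql before running it):

    -- edited line in index.sql
    CREATE INDEX "SCOTT"."EMP_IDX" ON "SCOTT"."EMP" ("ENAME") NOLOGGING TABLESPACE "USERS";

    -- once the import and index builds are done, switch back so future changes are protected by redo
    ALTER INDEX "SCOTT"."EMP_IDX" LOGGING;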
