Is 'impdp' stuck?

Hello

Env: Oracle 10gR2 (10.2.0.5)

OS: RHEL 5.8 64-bit

I started an "impdp" job at about 17:40 yesterday, as follows:

nohup impdp ID/'PWD'@DIS DIRECTORY=dp_dump_dir PARALLEL=6 DUMPFILE=EXPORT_%U.DMP NOLOGFILE=Y SCHEMAS=[schema list] > /dbexports/exports/IMP.log &

And here is the last line of the log file:

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

If I check the impdp process at the OS level, I see a running process:

oracle 8583 12121 0 10:12 pts/3 00:00:00 grep impdp

oracle 16932 12121 0 Aug08 pts/3 00:00:00 impdp DIRECTORY=data_pump_dir PARALLEL=6 DUMPFILE=...

When I query the database, I see this:

Select * from dba_datapump_jobs;

OWNER_NAME  JOB_NAME              OPERATION  JOB_MODE  STATE      DEGREE  ATTACHED_SESSIONS  DATAPUMP_SESSIONS
SYSTEM      SYS_IMPORT_SCHEMA_01  IMPORT     SCHEMA    EXECUTING  6       1                  8

It's been 17+ hours since I started this import. Here are the files the impdp job is importing:

-rw-r----- 1 oracle bCheminAdmin 50482532352 Aug  7 13:07 EXPORT_01.DMP

-rw-r----- 1 oracle bCheminAdmin 34717945856 Aug  7 13:14 EXPORT_02.DMP

-rw-r----- 1 oracle bCheminAdmin 28557950976 Aug  7 13:19 EXPORT_03.DMP

-rw-r----- 1 oracle bCheminAdmin 36996407296 Aug  7 13:27 EXPORT_04.DMP

-rw-r----- 1 oracle bCheminAdmin 37514956800 Aug  7 13:34 EXPORT_05.DMP

-rw-r----- 1 oracle bCheminAdmin 30535053312 Aug  7 13:39 EXPORT_06.DMP

I know the files are big. One of the imported tables contains some LOBs, and that table is quite large.

Is there anything I should or can check to make sure that 'impdp' is still running and isn't waiting on something?

Can I check what exactly 'impdp' is doing / has done right now?
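
For reference, here is what I could find to check so far (a sketch - the job name comes from the dba_datapump_jobs output above; the rest are standard dictionary views):

-- sessions attached to the Data Pump job
select s.sid, s.serial#, s.status, d.job_name
from v$session s, dba_datapump_sessions d
where s.saddr = d.saddr;

-- per-object progress, if the job is still moving
select sid, sofar, totalwork, units, message
from v$session_longops
where opname = 'SYS_IMPORT_SCHEMA_01'
and sofar <> totalwork;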

Please advise!

Best regards

Message was edited by: user130038

My other post was not a duplicate; it was a different question about suspending and resuming an impdp job. Since the impdp job runs in the background, how can I suspend it and resume it later?
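
From the Data Pump documentation, attaching to the job from a second session looks like the way to do this (a sketch, untested here; credentials are masked as in my command above, the job name comes from dba_datapump_jobs). STOP_JOB=IMMEDIATE keeps the master table, so the job stays restartable:

impdp ID/'PWD'@DIS attach=SYS_IMPORT_SCHEMA_01
Import> STOP_JOB=IMMEDIATE

impdp ID/'PWD'@DIS attach=SYS_IMPORT_SCHEMA_01
Import> START_JOB
Import> CONTINUE_CLIENT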

Are there errors in the database alert log? This may indicate LOB corruption - please run the scripts in MOS Doc 787004.1 to check.

HTH
Srini

Tags: Database

Similar Questions

  • expdp/impdp method in a database upgrade from 11.2.0.4 to 12.1.0.2 - an error in "command line" logic?

    Hi DBA:

    We are evaluating different methods of upgrading an 11gR2 DB to 12c (12.1.0.2) this year, and it is very important for us to use the right method.  It was reported that when the DB was 9i and was upgraded to 11g, the row order of some tables came out wrong when executing SELECT statements in the (Java) application - at the moment I don't know whether it was a plain "select * from ..." or a query with a specific WHERE clause, nor what type of data was wrong (maybe sequence or CLOB data, etc.).

    As I understand it, 'exp/imp' and 'expdp/impdp' are logical DB backup methods, so Oracle automatically re-inserts the data into each table on import.  If there are no errors in the impdp log after it's over, there shouldn't be any mistakes, right?

    If we use the "create a new database + expdp/impdp the user schemas" method to port the data from 11g to 12c, my questions are:

    1. Will this method lead to wrongly ordered sequence data? If so, how do we re-generate the sequence numbers?

    2. Can this method carry over CLOB and BLOB data without any errors? If there are errors, how do we detect and repair them?

    I used the same method to port data from 10.2.0.5 to 11g, with no problems.

    I'll keep this thread posted after getting more information from the application team.

    Thank you very much!

    Dania

    INSERT ... SELECT, CREATE TABLE ... AS SELECT, and Export-Import will all "re-organize" the table by assigning new blocks.

    An in-place upgrade (using DBUA or the command line) doesn't re-create the user tables (although it DOES rebuild the data dictionary tables).  Therefore, it does not change the physical location of rows in user tables.  However, there is no guarantee that subsequent INSERT, DELETE and UPDATE statements won't change the physical location of rows.  Nor is there a guarantee that a SELECT statement will always return rows in the same order, unless an ORDER BY is specified explicitly.

    If you export the data while it is still being used, you can't avoid sequence updates in the source database.  That usually results in mismatched sequence numbers between the source database and the new (imported) database. In fact, even the row data may differ, unless you use CONSISTENT or FLASHBACK_SCN.  (These two options in exp / expdp do NOT prevent sequence updates.)   The right way to manage sequences is to rebuild or increment them in the imported database to match the values in the source, once the import is complete.
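
    For example, a minimal sketch of that last step, with hypothetical names MYSEQ and MYTABLE.MYID (recreate each imported sequence just past the highest imported key):

    declare
      l_start number;
    begin
      -- the next value must clear the highest key actually present in the imported data
      select nvl(max(myid), 0) + 1 into l_start from mytable;
      execute immediate 'drop sequence myseq';
      execute immediate 'create sequence myseq start with ' || l_start;
    end;
    /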

    Hemant K Collette

  • IMPDP with CONTENT=metadata_only is very slow

    Oracle 11g Release 2

    A particular schema has 225 indexes. Some are primary keys, some are for foreign keys, plus unique and non-unique indexes.

    It takes a whole day to load the metadata. I know the indexes are the issue: when I check the job STATUS, it displays the index object it is working on.

    Job: SYS_IMPORT_SCHEMA_01
      Owner: SYSTEM                         
      Operation: IMPORT                         
      Creator Privs: TRUE                           
      GUID: 10929CF9D613EDCCE050140A5700146A
      Start Time: Thursday, 05 March, 2015 13:54:34
      Mode: SCHEMA                         
      Instance: IMR1
      Max Parallelism: 1
      EXPORT Job Parameters:
      Parameter Name      Parameter Value:
         CLIENT_COMMAND        system/******** parfile=par_johnc.txt   
         INCLUDE_METADATA      1
      IMPORT Job Parameters:
         CLIENT_COMMAND        system/******** parfile=par_johnc_indexes.txt 
         INCLUDE_METADATA      1
      State: EXECUTING                      
      Bytes Processed: 0
      Current Parallelism: 1
      Job Error Count: 0
      Dump File: /du06/expdump/ora1_exp_johnc.dmp
      
    Worker 1 Status:
      Process Name: DW00
      State: EXECUTING                      
      Object Schema: JOHNC
      Object Name: TP_AP_LINE_PART_ITEMS_IDX1
      Object Type: SCHEMA_EXPORT/TABLE/INDEX/INDEX
      Completed Objects: 125
      Worker Parallelism: 1
    

    I loaded the metadata for this schema with EXCLUDE=index. Now I'm trying to load the metadata for the indexes, with INCLUDE=index.

    It seems to have created 125 indexes so far, so we have more than 100 to go.

    Since there is no data in the tables, it should load the DDL very quickly.

    Here's my parfile:

    content=metadata_only
    dumpfile=ora1_exp_johnc.dmp
    logfile=or1_imp_johnc_indexes.log
    directory=exp_dmp
    schemas=JOHNC
    include=index
    

    Any ideas or suggestions on why or how to fix this?

    I used the SQLFILE parameter and got this:

    ...
    ...
    Processing object type SCHEMA_EXPORT/XMLSCHEMA/XMLSCHEMA
    ORA-39126: Worker unexpected fatal error in KUPW$WORKER.PUT_SQL_FILE [XMLSCHEMA:"JOHNC"."ACES.xsd"]
    ORA-06502: PL/SQL: numeric or value error
    
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPW$WORKER", line 8164
    
    ----- PL/SQL Call Stack -----
      object      line  object
      handle    number  name
    0x347477cc0    19028  package body SYS.KUPW$WORKER
    0x347477cc0      8191  package body SYS.KUPW$WORKER
    0x347477cc0    16255  package body SYS.KUPW$WORKER
    0x347477cc0    15446  package body SYS.KUPW$WORKER
    0x347477cc0      3944  package body SYS.KUPW$WORKER
    0x347477cc0      4705  package body SYS.KUPW$WORKER
    0x347477cc0      8916  package body SYS.KUPW$WORKER
    0x347477cc0      1651  package body SYS.KUPW$WORKER
    0x347aa28d0        2  anonymous block
    
    Job "SYSTEM"."SYS_SQL_FILE_SCHEMA_02" stopped due to fatal error at 09:41:12
    

    RESOLUTION:

    expdp system/******** parfile=par2.txt

    In the parfile I added:

    EXCLUDE=xmlschema

    After the export completed, I ran impdp and it finished successfully in 25 minutes.
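
    (For reference, a sketch of what such an export parfile might contain - the surrounding parameters are assumptions based on the job described above; only the EXCLUDE line is the actual fix:)

    # par2.txt - hypothetical reconstruction
    content=metadata_only
    schemas=JOHNC
    directory=exp_dmp
    dumpfile=ora1_exp_johnc2.dmp
    exclude=xmlschema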

  • ORA-00904: "YYYY": invalid identifier using impdp

    Hello

    I am trying to import part of a table into another table on a remote database using impdp:

    impdp centrumadmin/centrumadmin directory=DUMP_LOG_DIRECTORY network_link=REF2ISKNE logfile=backup_2014_01.log tables='AUD$_BACKUP' content=DATA_ONLY query=\"WHERE \"NTIMESTAMP#\" > to_date\('02-01-2014','DD-MM-YYYY'\)\"

    But I still get this error:

    Start "CENTRUMADMIN". "' SYS_IMPORT_TABLE_01 ': centrumadmin / * directory = network_link DUMP_LOG_DIRECTORY LOGFILE = backup_2014_01.log = REF2ISKNE TABLES = AUD$ happy _BACKUP = DATA_ONLY QUERY =" WHERE NTIMESTAMP # > to_date (02-2014, MM-YYYY) ""

    Estimate in progress using BLOCKS method...

    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

    Total estimation using BLOCKS method: 4.473 GB

    ORA-31693: Table data object "CENTRUMADMIN"."AUD$_BACKUP" failed to load/unload and is being skipped due to error:

    ORA-00904: "YYYY": invalid identifier

    Work "CENTRUMADMIN". "" SYS_IMPORT_TABLE_01 "completed with error (s 1) to Wed Feb 11 09:32:15 2015 elapsed 0 00:00:03

    Any ideas? If I change the date format to YYYY-MM-DD or anything else, the error always lands on the last part, e.g. ORA-00904: "DD": invalid identifier.

    Thank you.

    Honza

    Hi Mika,

    have you played around with doubling or tripling the quotes/escapes?

    such as:

    to_date\(\'02/01/2014\',\'DD-MM-YYYY\'\)

    Regards

    Kay

  • Impdp does not import the table

    When I run this command:

    expdp tosandata/tosandata tables=(acact2act) directory=dumpdir dumpfile=acact2actdmp.dmp

    A dump file and a log file are created, with the following log content:

    ;;;

    Export: Release 11.2.0.1.0 - Production on Mon Dec 16 14:45:53 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

    ;;;

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    Starting "TOSANDATA"."SYS_EXPORT_TABLE_01": tosandata/******** tables=(acact2act) directory=dumpdir dumpfile=acact2actdmp.dmp

    Estimate in progress using BLOCKS method...

    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

    Total estimation using BLOCKS method: 128 KB

    Processing object type TABLE_EXPORT/TABLE/TABLE

    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT

    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

    . . exported "TOSANDATA"."ACACT2ACT"  45.03 KB  1313 rows

    Master table "TOSANDATA"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded

    ******************************************************************************

    Dump file set for TOSANDATA.SYS_EXPORT_TABLE_01 is:

    F:\TOSAN\DATADUMP\ACACT2ACTDMP.DMP

    Job "TOSANDATA"."SYS_EXPORT_TABLE_01" successfully completed at 14:45:57

    When I want to import the dump file with the following command:

    impdp MYTESTUSER/123456 tables=(acact2act) directory=dumpdir dumpfile=acact2actdmp.dmp

    The table is not imported, and the following log is generated:

    ;;;

    Import: Release 11.2.0.1.0 - Production on Mon Dec 16 14:54:05 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

    ;;;

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    Master table "MYTESTUSER"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded

    ORA-39166: Object MYTESTUSER.ACACT2ACT was not found.

    Job "MYTESTUSER"."SYS_IMPORT_TABLE_01" completed at 14:54:06

    But when I use the following commands:

    exp tosandata/tosandata tables=(acact2act) file=F:\Tosan\DataDump\expdptool.dmp

    imp MYTESTUSER/123456 tables=(acact2act) file=F:\Tosan\DataDump\expdptool.dmp

    the export and import are done properly.

    Use the REMAP_SCHEMA parameter together with the TABLES parameter - explanation in the docs:

    Data Pump Import
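
    A minimal sketch of the import command with the remap (password and directory as in your post; the dump was taken from the TOSANDATA schema):

    impdp MYTESTUSER/123456 directory=dumpdir dumpfile=acact2actdmp.dmp tables=tosandata.acact2act remap_schema=tosandata:mytestuser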

    HTH
    Srini

  • What objects should be created before impdp?

    Hello

    Platform: RHEL 5.8 64-bit

    ENV: Source Server - Oracle 10.2.0.5 on ASM

    Target Server - Oracle 10.2.0.5 on ASM

    Specific schemas must be exported from the source database and imported into the target database - expdp/impdp (Data Pump) will be used for the export and import.

    My questions are:

    1. Which objects (roles, synonyms, tablespaces, etc.) must be created up front, before importing the schemas?
    2. Is it possible to export these objects (roles, synonyms, tablespaces, etc.) during the schema export (expdp) and have Data Pump automatically create them in the target database when importing?
    3. I might have to repeat the expdp/impdp steps multiple times. Do I have to leave the users/schemas and/or other objects in the target database for the later impdp runs?

    Best regards

    Post edited by: user130038

    Hello

    tablespaces/roles can only be extracted when FULL=y; they are not valid for a schema-mode extract. FULL=y does not mean it will extract everything no matter what: that is only the default, and as soon as you say INCLUDE=x the export will export nothing but that. Because you said include only tablespace, tablespace DDL is all you get - see the example below:

    This will not work:

    [oracle@server]:EIANCAPP:[~]# expdp / schemas=system include=tablespace

    Export: Release 11.2.0.2.0 - Production on Fri Jul 12 15:23:51 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning option
    ORA-39001: invalid argument value
    ORA-39041: Filter "INCLUDE" either identifies all object types or no object types.

    This works and extracts only the tablespace DDL:

    [oracle@server]:EIANCAPP:[~]# expdp / full=y include=tablespace

    Export: Release 11.2.0.2.0 - Production on Fri Jul 12 15:24:10 2013

    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning option
    Starting "OPS$ORACLE"."SYS_EXPORT_FULL_01":  /******** full=y include=tablespace
    Estimate in progress using BLOCKS method...
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type DATABASE_EXPORT/TABLESPACE
    Master table "OPS$ORACLE"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for OPS$ORACLE.SYS_EXPORT_FULL_01 is:
    /Oracle/export/EIANCAPP/expdat.dmp
    Job "OPS$ORACLE"."SYS_EXPORT_FULL_01" successfully completed at 15:24:17

    [oracle@server]:EIANCAPP:[~]#

    Cheers,

    Harry

    http://dbaharrison.blogspot.com

  • expdp/impdp: constraints in a parent-child relationship

    Hello

    I have a parent table parent1, and the tables child1, child2 and child3 have foreign keys referencing parent1.

    Now I want to do a delete on parent1. But as the number of records on parent1 is very high, we are going with expdp/impdp with the QUERY option.

    I took the expdp of parent1 with a query. Then I dropped parent1 with the CASCADE CONSTRAINTS option, and all the foreign keys on child1, 2 and 3 that referenced parent1 were automatically dropped.

    Now, if I impdp from the query-level dump file, will those foreign key constraints be re-created automatically on child1, 2 and 3, or do I need to recreate them manually?

    Kind regards

    ANU

    Hello
    The FKs will not be in the dumpfile - see the code example below, where I generate a sqlfile following pretty much the process you would have done. This is because an FK belongs to the DDL of the child table, not the parent.

    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    
    OPS$ORACLE@EMZA3>create table a (col1 number);
    
    Table created.
    
    OPS$ORACLE@EMZA3>alter table a add primary key (col1);
    
    Table altered.
    
    OPS$ORACLE@EMZA3>create table b (col1 number);
    
    Table created.
    
    OPS$ORACLE@EMZA3>alter table b add constraint x foreign key (col1) references a(col1);
    
    Table altered.
    
    OPS$ORACLE@EMZA3>
    
    EMZA3:[/oracle/11.2.0.1.2.DB/bin]# expdp / include=TABLE:\"=\'A\'\"
    
    Export: Release 11.2.0.3.0 - Production on Fri May 17 15:45:50 2013
    
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    Starting "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04":  /******** include=TABLE:"='A'"
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    . . exported "OPS$ORACLE"."A"                                0 KB       0 rows
    Master table "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for OPS$ORACLE.SYS_EXPORT_SCHEMA_04 is:
      /oracle/11.2.0.3.0.DB/rdbms/log/expdat.dmp
    Job "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully completed at 15:45:58
    
    Import: Release 11.2.0.3.0 - Production on Fri May 17 15:46:16 2013
    
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    Master table "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
    Starting "OPS$ORACLE"."SYS_SQL_FILE_FULL_01":  /******** sqlfile=a.sql
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Job "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully completed at 15:46:17
    
    -- CONNECT OPS$ORACLE
    ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
    -- new object type path: SCHEMA_EXPORT/TABLE/TABLE
    CREATE TABLE "OPS$ORACLE"."A"
       (    "COL1" NUMBER
       ) SEGMENT CREATION IMMEDIATE
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
     NOCOMPRESS LOGGING
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "SYSTEM" ;
    -- new object type path: SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ALTER TABLE "OPS$ORACLE"."A" ADD PRIMARY KEY ("COL1")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "SYSTEM"  ENABLE;
    -- new object type path: SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    DECLARE I_N VARCHAR2(60);
      I_O VARCHAR2(60);
      NV VARCHAR2(1);
      c DBMS_METADATA.T_VAR_COLL;
      df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS';
     stmt varchar2(300) := ' INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,d1,cl1) VALUES (''I'',6,:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,NULL,:14,:15,NULL,:16,:17)';
    BEGIN
      DELETE FROM "SYS"."IMPDP_STATS";
      c(1) := 'COL1';
    
      DBMS_METADATA.GET_STAT_INDNAME('OPS$ORACLE','A',c,1,i_o,i_n);
      EXECUTE IMMEDIATE stmt USING 0,I_N,NV,NV,I_O,0,0,0,0,0,0,0,0,NV,NV,TO_DATE('2013-05-17 15:43:24',df),NV;
    
      DBMS_STATS.IMPORT_INDEX_STATS('"' || i_o || '"','"' || i_n || '"',NULL,'"IMPDP_STATS"',NULL,'"SYS"');
      DELETE FROM "SYS"."IMPDP_STATS";
    END;
    /
    

    Kind regards
    Harry

    http://dbaharrison.blogspot.com/

  • IMPDP SQLFILE: multibyte characters in constraint_name lead to ORA-00972

    Hello

    I actually have constraint names made of multibyte characters (for example: constraint_name = "VALIDA_CONFIRMAÇÃO_PREÇO13").
    Of course this Bad Idea® is inherited (I'm against all fancy stuff like that, just as I am against spaces in the names of files or directories on my filesystem...)

    The scenario is as follows:
    0 - I'm supposed to do a 'remap_schema': everything in schema SCOTT should now be in schema NEW_SCOTT.
    1 - the scott schema is exported via datapump
    2 - I do an impdp with SQLFILE to extract all the DDL (tables, packages, synonyms, etc...)
    3 - I do a few sed substitutions on the generated sqlfile to replace each occurrence of SCOTT with NEW_SCOTT (this part is OK)
    4 - once the modified sqlfile has been executed, I do an impdp with DATA_ONLY.

    (The scenario was envisioned in this thread: {message:id=10628419})

    I get a few ORA-00972: identifier is too long at step 4, when running the sqlfile.
    I see that some of the constraint-creation DDL in the file (generated at step 2) reads as follows:
    ALTER TABLE "TW_PRI"."B_TRANSC" ADD CONSTRAINT "VALIDA_CONFIRMAÃÃO_PREÃO14" CHECK ...
    Of course, the original constraint name with the cedilla and the tilde gets translated to something longer than 30 chars/bytes...

    As the original name is from Brazil, I also tried adding an export LANG=pt_BR.UTF-8 in my script before running the impdp for the sqlfile. It doesn't change anything. (The original $LANG is en_US.UTF-8)

    To create a unit test for this thread, I tried to reproduce it on my sandbox database... but I don't get the issue. :-(

    The real system is a 4-node database on Exadata (11.2.0.3) with NLS_CHARACTERSET = AL32UTF8.
    My sandbox database is an 11.2.0.1 (non-RAC) on RHEL4, also AL32UTF8.

    The constraint_name is the same on the two systems: I checked byte by byte using DUMP() on the constraint_name.

    Feel free to shed light and/or ask for clarification if necessary.


    Thanks in advance to those who will take the time to read all this.
    :-)

    --------
    I decided to post my testcase from my sandbox database, even though it does NOT reproduce the issue (maybe I'm missing something obvious...).

    I use the following files.
    -createTable.sql:
    $ cat createTable.sql 
    drop table test purge;
    
    create table test
    (id integer,
    val varchar2(30));
    
    alter table test add constraint VALIDA_CONFIRMAÇÃO_PREÇO13 check (id<=10000000000);
    
    select constraint_name, lengthb(constraint_name) lb, lengthc(constraint_name) lc, dump(constraint_name) dmp
    from user_constraints where table_name='TEST';
    -expdpTest.sh:
    $ cat expdpTest.sh 
    expdp scott/tiger directory=scottdir dumpfile=testNonAscii.dmp tables=test
    -impdpTest.sh:
    $ cat impdpTest.sh 
    impdp scott/tiger directory=scottdir dumpfile=testNonAscii.dmp sqlfile=scottdir:test.sqlfile.sql tables=test
    Here's the run:
    [oracle@Nicosa-oel test_nonAsciiColName]$ sqlplus scott/tiger
    
    SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 12 18:58:27 2013
    
    Copyright (c) 1982, 2009, Oracle.  All rights reserved.
    
    
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    
    SQL> @createTable
    
    Table dropped.
    
    
    Table created.
    
    
    Table altered.
    
    
    CONSTRAINT_NAME                  LB       LC
    ------------------------------ ---------- ----------
    DMP
    --------------------------------------------------------------------------------
    VALIDA_CONFIRMAÇÃO_PREÇO13             29         26
    Typ=1 Len=29: 86,65,76,73,68,65,95,67,79,78,70,73,82,77,65,195,135,195,131,79,95
    ,80,82,69,195,135,79,49,51
    
    
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    [oracle@Nicosa-oel test_nonAsciiColName]$ ./expdpTest.sh 
    
    Export: Release 11.2.0.1.0 - Production on Tue Feb 12 19:00:12 2013
    
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** directory=scottdir dumpfile=testNonAscii.dmp tables=test 
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    . . exported "SCOTT"."TEST"                                  0 KB       0 rows
    Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
      /home/oracle/scott_dir/testNonAscii.dmp
    Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 19:00:22
    
    [oracle@Nicosa-oel test_nonAsciiColName]$ ./impdpTest.sh 
    
    Import: Release 11.2.0.1.0 - Production on Tue Feb 12 19:00:26 2013
    
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SCOTT"."SYS_SQL_FILE_TABLE_01" successfully loaded/unloaded
    Starting "SCOTT"."SYS_SQL_FILE_TABLE_01":  scott/******** directory=scottdir dumpfile=testNonAscii.dmp sqlfile=scottdir:test.sqlfile.sql tables=test 
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Job "SCOTT"."SYS_SQL_FILE_TABLE_01" successfully completed at 19:00:32
    
    [oracle@Nicosa-oel test_nonAsciiColName]$ cat scott_dir/test.sqlfile.sql 
    -- CONNECT SCOTT
    ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
    -- new object type path: TABLE_EXPORT/TABLE/TABLE
    CREATE TABLE "SCOTT"."TEST" 
       (     "ID" NUMBER(*,0), 
         "VAL" VARCHAR2(30 BYTE)
       ) SEGMENT CREATION DEFERRED 
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 COMPRESS FOR OLTP LOGGING
      TABLESPACE "MYTBSCOMP" ;
     
    -- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ALTER TABLE "SCOTT"."TEST" ADD CONSTRAINT "VALIDA_CONFIRMAÇÃO_PREÇO13" CHECK (id<=10000000000) ENABLE;
    I was expecting the cedilla and the tilde characters NOT to be displayed correctly...

    Edited by: Nicosa on February 12, 2013 19:13

    Hello
    I had some strange effects when I didn't set the translation to UTF-8 in my ssh client (PuTTY in my case). Could this somehow be something that happens automatically in one environment but not in the other?

    In PuTTY, this is Window -> Translation, then choose UTF-8.

    Also, are you sure it's not sed mangling the characters rather than impdp?

    Cheers,
    Harry

  • migration of a db by expdp/impdp from AIX to Linux. Impdp errors, help.

    Hi, I did a full expdp (11gR1) and a full impdp (11gR2) to migrate a db from AIX to Linux.

    I have the following questions in mind:
    (1) I checked the import logs and found a lot of errors. I think most can be ignored, but I just want to be doubly sure before I release the newly migrated db for use. Is there a better way to avoid these errors?


    ORA-31684: Object type SEQUENCE:"SYSTEM"."MVIEW$_ADVSEQ_GENERIC" already exists
    ORA-31684: Object type SEQUENCE:"SYSTEM"."MVIEW$_ADVSEQ_ID" already exists
    ORA-31684: Object type SEQUENCE:"SYSTEM"."REPCAT$_EXCEPTIONS_S" already exists
    ORA-31684: Object type SEQUENCE:"SYSTEM"."REPCAT$_FLAVORS_S" already exists
    ORA-31684: Object type SEQUENCE:"SYSTEM"."REPCAT$_FLAVOR_NAME_S" already exists
    ORA-31684: Object type SEQUENCE:"SYSTEM"."REPCAT$_REFRESH_TEMPLATES_S" already exists
    ......
    Table "SYSTEM"."REPCAT$_AUDIT_COLUMN" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
    Table "SYSTEM"."REPCAT$_FLAVOR_OBJECTS" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
    Table "SYSTEM"."REPCAT$_TEMPLATE_STATUS" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
    .......
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
    ORA-39111: Dependent object type OBJECT_GRANT:"SYSTEM" skipped, base object type VIEW:"SYSTEM"."MVIEW_WORKLOAD" already exists
    ORA-39111: Dependent object type OBJECT_GRANT:"SYSTEM" skipped, base object type VIEW:"SYSTEM"."MVIEW_FILTER" already exists
    ORA-39111: Dependent object type OBJECT_GRANT:"SYSTEM" skipped, base object type VIEW:"SYSTEM"."MVIEW_LOG" already exists
    ORA-39111: Dependent object type OBJECT_GRANT:"SYSTEM" skipped, base object type VIEW:"SYSTEM"."MVIEW_FILTERINSTANCE" already exists
    ORA-39111: Dependent object type OBJECT_GRANT:"SYSTEM" skipped, base object type VIEW:"SYSTEM"."MVIEW_RECOMMENDATIONS" already exists
    ORA-39111: Dependent object type OBJECT_GRANT:"SYSTEM" skipped, base object type VIEW:"SYSTEM"."MVIEW_EVALUATIONS" already exists
    ...............

    . . imported "SYSTEM"."REPCAT$_TEMPLATE_REFGROUPS"  5.015 KB  0 rows
    . . imported "SYSTEM"."REPCAT$_TEMPLATE_SITES"  5.359 KB  0 rows
    ORA-31693: Table data object "SYSTEM"."REPCAT$_TEMPLATE_STATUS" failed to load/unload and is being skipped due to error:
    ORA-00001: unique constraint (SYSTEM.REPCAT$_TEMPLATE_STATUS_PK) violated
    . . imported "SYSTEM"."REPCAT$_TEMPLATE_TARGETS"  4.937 KB  0 rows
    ORA-31693: Table data object "SYSTEM"."REPCAT$_TEMPLATE_TYPES" failed to load/unload and is being skipped due to error:
    ORA-00001: unique constraint (SYSTEM.REPCAT$_TEMPLATE_TYPES_PK) violated
    . . imported "SYSTEM"."REPCAT$_USER_AUTHORIZATIONS"  4.773 KB  0 rows
    . . imported "SYSTEM"."SQLPLUS_PRODUCT_PROFILE"  5.140 KB  0 rows
    . . imported "SYSTEM"."TOAD_PLAN_SQL"  4.859 KB  0 rows


    (2) In addition, I've seen a few new users in 11gR2 that are not in 11gR1; I shouldn't worry about updating these users, right?

    Thank you in advance.

    You receive 'ORA-31684: Object type SEQUENCE:"SYSTEM"."MVIEW$_ADVSEQ_GENERIC" already exists' because the SYSTEM object already exists. You can ignore this error.

    You can also view links below

    Cross platform Transportable Tablespace with RMAN
    http://www.oracleracexpert.com/2009/08/transportable-tablespace-export-import.html

    Import/export transportable tablespace on the same endian platforms
    http://www.oracleracexpert.com/2009/10/cross-platform-transportable-tablespace.html

    Hope this helps,

    Regards
    http://www.oracleracexpert.com
    Block corruption and recovery
    http://www.oracleracexpert.com/2009/08/block-corruption-and-recovery.html
    Redo log corruption and recovery
    http://www.oracleracexpert.com/2009/08/redo-log-corruption-and-recovery.html

  • an impdp problem

    Hello

    11gR2, Linux x86-64

    I made a successful export (expdp) of the source db using "sysdba".

    When using impdp, I see this error:


    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning and Automatic Storage Management options
    ORA-39002: invalid operation
    ORA-31694: master table "SYS"."IMP_L2" failed to load/unload
    ORA-31644: unable to position to block number 226430 in dump file "/tmp/dump/l2/l2_2.dmp"
    ORA-19501: read error on file "/tmp/dump/l2/l2_2.dmp", block number 1 (block size=4096)
    ORA-27070: async read/write failed
    Linux-x86_64 Error: 22: Invalid argument
    Additional information: 3

    Looks like block corruption, but I did another exp/imp and received the same error - what is the problem?

    FYI, here is the impdp parfile:

    USERID = "/ as sysdba".
    DIRECTORY = data_pump_dir1
    LOGFILE = data_pump_dir1:import_l2.log
    JOB_NAME = imp_l2
    DUMPFILE = l2.dmp
    REMAP_SCHEMA = source_schema:target_schema
    REMAP_TABLESPACE = (source_1:target_1,
    source_1:target_2)
    TABLES =(source_shcema.table1,)
    source_shcema.table2)
    )


    Thank you

    Hello

    (1) Do you have permission to write to the /tmp/dump/l2 directory?
    (2) Where is /tmp mounted?
    (3) Try another local directory and check whether you get the same error - see the sketch below for the dictionary checks.
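
    A sketch of the dictionary checks for (1) - the directory name is taken from your parfile:

    select directory_name, directory_path
    from dba_directories
    where directory_name = 'DATA_PUMP_DIR1';

    -- who can read/write through the directory object
    select grantee, privilege
    from dba_tab_privs
    where table_name = 'DATA_PUMP_DIR1';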

    Kind regards
    Rakesh

  • Idle workers in IMPDP with the parallel option

    Hello gurus,

    I took a datapump export of a 29GB partitioned table with the following parameters, and it was extremely quick (< 10 minutes):

    parallel=4
    filesize=500MB
    directory=PUMP_DIR
    estimate=statistics
    dumpfile=exp%U.dmp
    reuse_dumpfiles=y
    logfile=export.log
    tables=user1.t1

    The export used 4 parallel workers that were active the whole time, hence the high speed.

    However, when I tried a datapump import on the same database into an empty table (different schema), the performance was very poor (55 minutes):

    parallel=4
    directory=PUMP_DIR
    dumpfile=exp%U.dmp
    logfile=import.log
    tables=user1.t1
    remap_schema=user1:user2
    table_exists_action=append

    I noticed that the parallel workers were idle most of the time (the degree of parallelism I specified was not applied) and the whole import was serialized.

    Can someone give me an idea why the parallel workers were idle during the IMPDP?


    [r00tb0x | http://www.r00tb0x.com]

    From what you describe, I'll assume that you are doing a data-only import, or at the very least that your tables already exist and you are importing into partitioned tables. If that's true, then you've hit a situation we know about. Here's what happens:

    This is true for partitioned and subpartitioned tables, and only if the Data Pump job loading the data did not also create the table. The last part of the previous sentence is what makes this true. If you run one Data Pump job that simply creates the tables, and then run another import job that loads the data, even when they come from the same dumpfile, you will hit this situation. The situation is:

    When the Data Pump job loading data into a partitioned or subpartitioned table did not create the table, Data Pump cannot be sure that the partitioning key is the same. For this reason, when Data Pump loads the data, it takes out a table lock, and that blocks any other workers loading into the same table in parallel. If the Data Pump job created the table, then only a partition or subpartition lock is taken, and other workers are free to take locks on different partitions/subpartitions of the same table.

    What I suppose you are seeing is that all the workers are trying to load data into the same table, but different partitions, and one worker holds the table lock. This blocks the other workers.

    There is a 'fix' for this, but it is a minimal one. Data Pump will then no longer take the table lock; instead, only a partition/subpartition lock is taken. You will not see the maximum parallelism used in these cases, but you will not see other workers waiting for an exclusive lock on the same table either.

    I do not remember the patch number and I do not remember which version it went into, but Oracle Support should be able to help with that.

    If the same Data Pump job creates the tables, or if the tables are not partitioned/subpartitioned, then I have not heard of this issue.
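
    Until you have the patch, a workaround consistent with the above is to let the loading job itself (re)create the table, so that only partition/subpartition locks are taken - a sketch of the import parfile (parameters copied from the original post, with the one changed line):

    parallel=4
    directory=PUMP_DIR
    dumpfile=exp%U.dmp
    logfile=import.log
    tables=user1.t1
    remap_schema=user1:user2
    table_exists_action=replace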

    Thank you

    Dean

  • Possibility of a clean impdp without "drop user cascade"?

    First time poster here.

    Unlike a lot of readers here, our organization has the super-duper DBA group, who are very sensitive about what we do to the servers. While what they do is understandable, it is also a headache for us.

    The setup: we have 2 servers on 11g, a PROD and a QA. The DBAs take a dump of the PROD database via expdp and give it to us to restore on QA. The plan is to drop and re-create the user before running impdp.

    The problem is that we do not have 'connect as sysdba' privileges, so we cannot drop and re-create the user - and we won't be given them.

    Is it just as good if:

    (1) we drop all the tables, procedures, sequences, etc., then run impdp

    (2) is there an option to run impdp that will force it to overwrite the existing objects?

    (3) are there other alternatives for a complete refresh of the schema?

    Thanks in advance for your advice.

    David

    (1) It is just as good. Maybe better, because the user's privileges, quotas, etc. remain in place.

    (2) Not sure offhand. See the documentation.

    (3) Truncate everything and start the import. This is useful if your structure is the same and you are simply refreshing the data.

    Here is a block that I use to drop all of a user's objects when I don't want to drop and re-create the user. It handles the object types in my application; if you have other types of objects, you may need to add sections for them.

    -- this will drop all objects for a user.
    -- it assumes you are logged in AS the user owning the objects
    
    BEGIN
       FOR x_rec in (select view_name from user_views) LOOP
          execute immediate 'drop view '||x_rec.view_name;
       END LOOP;
    
       FOR x_rec in (select mview_name from user_mviews) LOOP
          execute immediate 'drop materialized view '||x_rec.mview_name;
       END LOOP;
    
       FOR x_rec in (select table_name from user_tables) LOOP
          execute immediate 'drop table '||x_rec.table_name||' cascade constraints';
       END LOOP;
    
       FOR x_rec in (select synonym_name from user_synonyms) LOOP
          execute immediate 'drop synonym '||x_rec.synonym_name;
       END LOOP;
    
       FOR x_rec in (select sequence_name from user_sequences) LOOP
          execute immediate 'drop sequence '||x_rec.sequence_name;
       END LOOP;
    
       FOR x_rec in (select index_name from user_indexes) LOOP
          execute immediate 'drop index '||x_rec.index_name;
       END LOOP;
    
    END;
    /
    
    purge recyclebin;
    
  • Using expdp/impdp with Data Pump

    Hi all

    I am a newbie with Oracle database import and export using Oracle Data Pump. I use Oracle 10gR2 on Windows 7.

    After creating the directory object 'test_dir' and granting read and write privileges to user scott, I connect to the database as scott and create the table 'test' with two rows.

    Then I run expdp at the command prompt as follows:

    C:\users\administrator> expdp scott/tiger@orcl tables=test content=all directory=TEST_DIR dumpfile=test.dmp logfile=expdp_test.log

    Export: Release 10.2.0.3.0 - Production on Monday, June 13, 2011 20:20:54

    Copyright (c) 2003, 2005, Oracle. All rights reserved.

    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/********@orcl tables=test content=all directory=TEST_DIR dumpfile=test.dmp logfile=expdp_test.log
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    . . exported "SCOTT"."TEST"  0 KB  0 rows
    Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
    D:\DAMP\TEST.DMP
    Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 20:21:02

    My question is why does data pump export the table 'test' without its rows (i.e. the line: exported "SCOTT"."TEST" 0 KB 0 rows)? How can I make the export include the rows?


    I dropped the table test, then I ran the impdp command as follows:

    C:\users\administrator> impdp scott/tiger tables=test content=all directory=TEST_DIR dumpfile=test.dmp logfile=impdp_test.log

    Import: Release 10.2.0.3.0 - Production on Monday, June 13, 2011 20:23:18

    Copyright (c) 2003, 2005, Oracle. All rights reserved.

    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Master table "SCOTT"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
    Starting "SCOTT"."SYS_IMPORT_TABLE_01": scott/******** tables=test content=all directory=TEST_DIR dumpfile=test.dmp logfile=impdp_test.log
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "SCOTT"."TEST"  0 KB  0 rows
    Job "SCOTT"."SYS_IMPORT_TABLE_01" successfully completed at 20:23:21


    Then, after select * from test: no rows returned.

    Please, can someone shed light on this topic... What I expected from the data pump operation on the table is that it would export and import the table together with its data, if I'm not mistaken.

    Regards
    Sadik

    Sadik wrote:
    It had two rows

    Export disagrees with you.
    Have you COMMITTED after the INSERT and before the export?
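
    In other words (a minimal sketch; the column values are illustrative):

    SQL> insert into test values (1, 'a');
    SQL> insert into test values (2, 'b');
    SQL> -- expdp runs in a separate session and only sees committed rows:
    SQL> commit;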

  • Need help with impdp via network_link of non-system, non-application tables

    Hello

    I'm trying to impdp non-system, non-application tables via network_link and I'm seeing the following errors:

    Estimate in progress using BLOCKS method...
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
    ORA-39126: Worker unexpected fatal error in KUPW$WORKER.MAIN [GET_TABLE_DATA_OBJECTS]
    ORA-02041: client database did not begin a transaction

    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
    ORA-06512: at "SYS.KUPW$WORKER", line 8353

    ----- PL/SQL Call Stack -----
      object      line  object
      handle    number  name
    0x629ba128    19208  package body SYS.KUPW$WORKER
    0x629ba128     8385  package body SYS.KUPW$WORKER
    0x629ba128    12748  package body SYS.KUPW$WORKER
    0x629ba128     4742  package body SYS.KUPW$WORKER
    0x629ba128     9110  package body SYS.KUPW$WORKER
    0x629ba128     1688  package body SYS.KUPW$WORKER
    0x6c54d1c8        2  anonymous block

    Job "SYSADM"."IMP_USERS" stopped due to fatal error at 12:01:39

    Here's my parfile:

    NETWORK_LINK=DBLINK
    FULL=Y
    PARALLEL=8
    EXCLUDE=SCHEMA:"IN ('SYS','SYSTEM','OUTLN','PS','SYSADM','GENERAL','PEOPLE','DIVE','ORACLE_OCM','CSMIG','PSMON','PERFSTAT')"
    JOB_NAME=IMP_USERS

    Please let me know if you need more details beyond those provided above. The source is an 11gR2 on RHEL5, and the data is pulled into a 10.2.0.4 on HP-UX.

    Edited by: beef on February 10, 2011 08:35

    It seems that it might be a bug.

    Bug 10115400 - ORA-39126 when using expdp/impdp with network_link [ID 10115400.8]

    You can open a Service Request with Oracle Support.

  • Full expdp and impdp: one db to another

    Hello! Good day!

    I would like to ask for any help with my problem.

    I would like to create a full database export and import it into a different database. These 2 databases are on separate machines.
    I'm trying to use the expdp and impdp tools for this task. However, I ran into some problems during the import.

    Here are the details of my problems:

    When I try to impdp the dump file, it seems that I am not able to import the user data and metadata.

    Here are the exact commands that I used for the export and import:

    Export (Server #1)

    expdp user01/******** directory=ora3_dir full=y dumpfile=db_full%U.dmp filesize=2G parallel=4 logfile=db_full.log

    Import (Server #2)
    impdp user01/******** directory=ora3_dir dumpfile=db_full%U.dmp full=y logfile=db_full.log sqlfile=db_full.sql estimate=blocks parallel=4

    Here is the log generated during the impdp run:

    ;;;
    Import: Release 10.2.0.1.0 - 64 bit Production on Friday, 27 November 2009 17:41:07

    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    ;;;
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64 bit Production
    With the Partitioning, OLAP and Data Mining options
    Master table "PGDS"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
    Starting "PGDS"."SYS_SQL_FILE_FULL_01": PGDS/******** directory=ora3_dir dumpfile=ssmpdb_full%U.dmp full=y logfile=ssmpdb_full.log sqlfile=ssmpdb_full.sql
    Processing object type DATABASE_EXPORT/TABLESPACE
    Processing object type DATABASE_EXPORT/PROFILE
    Processing object type DATABASE_EXPORT/SYS_USER/USER
    Processing object type DATABASE_EXPORT/SCHEMA/USER
    Processing object type DATABASE_EXPORT/ROLE
    Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
    Processing object type DATABASE_EXPORT/RESOURCE_COST
    Processing object type DATABASE_EXPORT/SCHEMA/DB_LINK
    Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
    Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/SEQUENCE
    Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
    Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/CONTEXT
    Processing object type DATABASE_EXPORT/SCHEMA/PUBLIC_SYNONYM/SYNONYM
    Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
    Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
    Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
    Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
    Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/PRE_TABLE_ACTION
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
    Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/PACKAGE_SPEC
    Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/FUNCTION
    Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/PROCEDURE
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/ALTER_FUNCTION
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/VIEW
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/COMMENT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
    Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_BODY
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_TABLE_ACTION
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TRIGGER
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/TRIGGER
    Processing object type DATABASE_EXPORT/SCHEMA/JOB
    Processing object type DATABASE_EXPORT/SCHEMA/DIMENSION
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCACT_INSTANCE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCDEPOBJ
    Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
    Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
    Job "PGDS"."SYS_SQL_FILE_FULL_01" successfully completed at 17:43:09

    Thank you in advance.

    The good news is that your dumpfile seems fine. It has metadata and data.

    I looked through your impdp command and found your problem. You added the SQLFILE parameter. This tells datapump to create a file that can be run from sqlplus; it does not actually create the objects. It also excludes the data, because data could get pretty ugly in a sqlfile.

    Here's your impdp command:

    impdp user01/******** directory=ora3_dir dumpfile=db_full%U.dmp full=y logfile=db_full.log sqlfile=db_full.sql ...

    Just remove the

    sqlfile=db_full.sql

    After you ran your first job, you got a file named db_full.sql with all the create statements inside. Once you remove the sqlfile parameter, your import will work.
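
    A sketch of the corrected command - the same parameters as yours, minus SQLFILE (I also dropped ESTIMATE, which for impdp only applies to network imports):

    impdp user01/******** directory=ora3_dir dumpfile=db_full%U.dmp full=y logfile=db_full.log parallel=4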

    Dean
