Impdp sqlfile parameter

Hello

OPERATING SYSTEM: HP-UX
Oracle 10g

My production DBA team sent me a sqlfile covering 30K tables. They used the SQLFILE parameter to get the DDL of the tables. I need to run the same DDL on my dev database.

My thought is: rather than generating the DDL into a sqlfile and recreating all the DDL in the dev environment,
why not use the command below to create the table structures...

exp user/pwd file=abc.dmp rows=n owner=aabb log=abc.log

Wouldn't the above command create all the table structures (DDL)?

If so, for what reason would one need the sqlfile parameter, taking all the DDL and creating it manually?

Also, please tell me the importance of the sqlfile parameter in expdp/impdp...


Thank you
KSG

Published by: KSG on March 25, 2010 10:51

My understanding is... after you run the impdp command, a sqlfile will be generated which contains the DDL commands relevant to the dumpfile. Am I right about that?

Yes. It is somewhat similar to the show=y option in imp.
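
For instance, a minimal sketch (the directory object, dump file and credentials are placeholders):

impdp user/pwd directory=dp_dir dumpfile=abc.dmp sqlfile=abc_ddl.sql

Nothing is imported; the DDL is only written to abc_ddl.sql, which you can review and then run from SQL*Plus.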

And I want to run the script to create the DDL...

Another doubt of mine...

Isn't creating all the DDL via the sql script file the hard way?

Why shouldn't I instead take an export of all the table structures with

expdp user/pwd file=dumpfile.dmp rows=n owner=abc log=dumpfile.log

and just import the dumpfile.dmp... Wouldn't this create all the DDL, exactly the same as generating and running the sqlfile script?

It depends on the requirement. Let's say you want to exclude a few tables of a full dump when creating the structure; sqlfile will come in handy.
If you are not sure what objects your dump file is made up of and you want to see its contents (objects), then instead of importing into the database to check, sqlfile lets you see them without actually importing. You can use rows=n only when you are sure you want to import the entire structure of the objects.
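
For example, assuming a hypothetical full dump and placeholder names, the two uses described above could look like this (quoting may need adjusting for your shell):

impdp user/pwd directory=dp_dir dumpfile=full.dmp sqlfile=contents.sql
impdp user/pwd directory=dp_dir dumpfile=full.dmp content=metadata_only exclude=TABLE:"IN ('T1','T2')"

The first command only lists the DDL into contents.sql; the second creates all structures except the two named tables.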

Anil Malkai

Tags: Database

Similar Questions

  • IMPDP SQLFILE: constraint_name with multibyte characters leads to ORA-00972

    Hello

    I currently have constraint names made of multibyte characters (for example: constraint_name = "VALIDA_CONFIRMAÇÃO_PREÇO13").
    Of course this Bad Idea® is inherited (I'm against all such fancy stuff, just as I'm against spaces in file or directory names on my filesystem...)

    The scenario is as follows:
    0 - I'm supposed to do a remap_schema: everything in schema SCOTT should end up in schema NEW_SCOTT.
    1 - the scott schema is exported via datapump
    2 - I do an impdp with SQLFILE to extract all the DDL (tables, packages, synonyms, etc...)
    3 - I do a few seds on the generated sqlfile to replace each occurrence of SCOTT with NEW_SCOTT (this part is OK)
    4 - once the modified sqlfile has been executed, I do an impdp with DATA_ONLY.

    (The scenario was discussed in this thread: {message:id=10628419})

    I get a few ORA-00972: identifier is too long errors at step 4, when running the sqlfile.
    I see that some DDL for creating constraints in the file (generated in step 2) reads as follows:
    ALTER TABLE "TW_PRI"."B_TRANSC" ADD CONSTRAINT "VALIDA_CONFIRMAÃÃO_PREÃO14" CHECK ...
    Of course, the original constraint name with the cedilla and the tilde gets translated to something that is longer than 30 chars/bytes...

    As the original name is Brazilian, I also tried to add an export LANG=pt_BR.UTF-8 in my script before running the impdp for the sqlfile. It doesn't change anything. (The original $LANG is en_US.UTF-8.)

    To create a test case for this thread, I tried to reproduce it on my sandbox database... but I don't get the issue there. :-(

    The real system is a 4-node database on Exadata (11.2.0.3) with NLS_CHARACTERSET = AL32UTF8.
    My sandbox database is 11.2.0.1 (non-RAC) on RHEL4, also AL32UTF8.

    The constraint_name is the same on the two systems: I checked byte by byte using DUMP() on the constraint_name.

    Feel free to shed light and/or ask for clarification if necessary.


    Thanks in advance for those who will take their time to read all this.
    :-)

    --------
    I decided to post my testcase from my sandbox database, even though it does NOT reproduce the issue (maybe I'm missing something obvious...).

    I use the following files.
    -createTable.sql:
    $ cat createTable.sql 
    drop table test purge;
    
    create table test
    (id integer,
    val varchar2(30));
    
    alter table test add constraint VALIDA_CONFIRMAÇÃO_PREÇO13 check (id<=10000000000);
    
    select constraint_name, lengthb(constraint_name) lb, lengthc(constraint_name) lc, dump(constraint_name) dmp
    from user_constraints where table_name='TEST';
    -expdpTest.sh:
    $ cat expdpTest.sh 
    expdp scott/tiger directory=scottdir dumpfile=testNonAscii.dmp tables=test
    -impdpTest.sh:
    $ cat impdpTest.sh 
    impdp scott/tiger directory=scottdir dumpfile=testNonAscii.dmp sqlfile=scottdir:test.sqlfile.sql tables=test
    Here is the run:
    [oracle@Nicosa-oel test_nonAsciiColName]$ sqlplus scott/tiger
    
    SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 12 18:58:27 2013
    
    Copyright (c) 1982, 2009, Oracle.  All rights reserved.
    
    
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    
    SQL> @createTable
    
    Table dropped.
    
    
    Table created.
    
    
    Table altered.
    
    
    CONSTRAINT_NAME                  LB       LC
    ------------------------------ ---------- ----------
    DMP
    --------------------------------------------------------------------------------
    VALIDA_CONFIRMAÇÃO_PREÇO13             29         26
    Typ=1 Len=29: 86,65,76,73,68,65,95,67,79,78,70,73,82,77,65,195,135,195,131,79,95
    ,80,82,69,195,135,79,49,51
    
    
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    [oracle@Nicosa-oel test_nonAsciiColName]$ ./expdpTest.sh 
    
    Export: Release 11.2.0.1.0 - Production on Tue Feb 12 19:00:12 2013
    
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** directory=scottdir dumpfile=testNonAscii.dmp tables=test 
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    . . exported "SCOTT"."TEST"                                  0 KB       0 rows
    Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
      /home/oracle/scott_dir/testNonAscii.dmp
    Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 19:00:22
    
    [oracle@Nicosa-oel test_nonAsciiColName]$ ./impdpTest.sh 
    
    Import: Release 11.2.0.1.0 - Production on Tue Feb 12 19:00:26 2013
    
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SCOTT"."SYS_SQL_FILE_TABLE_01" successfully loaded/unloaded
    Starting "SCOTT"."SYS_SQL_FILE_TABLE_01":  scott/******** directory=scottdir dumpfile=testNonAscii.dmp sqlfile=scottdir:test.sqlfile.sql tables=test 
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Job "SCOTT"."SYS_SQL_FILE_TABLE_01" successfully completed at 19:00:32
    
    [oracle@Nicosa-oel test_nonAsciiColName]$ cat scott_dir/test.sqlfile.sql 
    -- CONNECT SCOTT
    ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
    -- new object type path: TABLE_EXPORT/TABLE/TABLE
    CREATE TABLE "SCOTT"."TEST" 
       (     "ID" NUMBER(*,0), 
         "VAL" VARCHAR2(30 BYTE)
       ) SEGMENT CREATION DEFERRED 
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 COMPRESS FOR OLTP LOGGING
      TABLESPACE "MYTBSCOMP" ;
     
    -- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ALTER TABLE "SCOTT"."TEST" ADD CONSTRAINT "VALIDA_CONFIRMAÇÃO_PREÇO13" CHECK (id<=10000000000) ENABLE;
    I was expecting the cedilla and tilde characters not to be displayed correctly...

    Published by: Nicosa on February 12, 2013 19:13

    Hello
    I had some strange effects when I didn't set the translation to UTF-8 in my ssh client (PuTTY in my case). Is this somehow something that happens automatically in one environment but not the other?

    In PuTTY, this is Window -> Translation, then choose UTF-8.

    Also, are you sure it's not sed that's the culprit rather than impdp?
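
    If it is sed, a minimal sketch of the remap step that forces a UTF-8 locale (file name taken from your test case; this is an assumption, adjust as needed):

    LC_ALL=en_US.UTF-8 sed 's/"SCOTT"\./"NEW_SCOTT"./g' test.sqlfile.sql > test.sqlfile.new_scott.sql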

    Cheers,
    Harry

  • IMPDP content=metadata_only is very slow

    Oracle 11g Release 2

    A particular schema has 225 indexes. Some are primary keys, some are for foreign keys, and there are unique and non-unique indexes.

    It takes a whole day to load the metadata. I know that the indexes are the issue; when I check the STATUS, it shows that it is working on an index object.

    Job: SYS_IMPORT_SCHEMA_01
      Owner: SYSTEM                         
      Operation: IMPORT                         
      Creator Privs: TRUE                           
      GUID: 10929CF9D613EDCCE050140A5700146A
      Start Time: Thursday, 05 March, 2015 13:54:34
      Mode: SCHEMA                         
      Instance: IMR1
      Max Parallelism: 1
      EXPORT Job Parameters:
      Parameter Name      Parameter Value:
         CLIENT_COMMAND        system/******** parfile=par_johnc.txt   
         INCLUDE_METADATA      1
      IMPORT Job Parameters:
         CLIENT_COMMAND        system/******** parfile=par_johnc_indexes.txt 
         INCLUDE_METADATA      1
      State: EXECUTING                      
      Bytes Processed: 0
      Current Parallelism: 1
      Job Error Count: 0
      Dump File: /du06/expdump/ora1_exp_johnc.dmp
      
    Worker 1 Status:
      Process Name: DW00
      State: EXECUTING                      
      Object Schema: JOHNC
      Object Name: TP_AP_LINE_PART_ITEMS_IDX1
      Object Type: SCHEMA_EXPORT/TABLE/INDEX/INDEX
      Completed Objects: 125
      Worker Parallelism: 1
    

    I loaded the metadata for this schema with EXCLUDE=index. Now I'm trying to load the metadata for the indexes with INCLUDE=index.

    So far it has created 125 indexes, so there are more than 100 to go.

    Since there is no data in the tables, it should load the DDL very quickly.

    Here's my parfile:

    content=metadata_only
    dumpfile=ora1_exp_johnc.dmp
    logfile=or1_imp_johnc_indexes.log
    directory=exp_dmp
    schemas=JOHNC
    include=index
    

    Any ideas or suggestions on why or how to fix this?

    I used the sqlfile parameter and got this:

    ...
    ...
    Processing object type SCHEMA_EXPORT/XMLSCHEMA/XMLSCHEMA
    ORA-39126: Worker unexpected fatal error in KUPW$WORKER.PUT_SQL_FILE [XMLSCHEMA:"JOHNC"."ACES.xsd"]
    ORA-06502: PL/SQL: numeric or value error
    
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPW$WORKER", line 8164
    
    ----- PL/SQL Call Stack -----
      object      line  object
      handle    number  name
    0x347477cc0    19028  package body SYS.KUPW$WORKER
    0x347477cc0      8191  package body SYS.KUPW$WORKER
    0x347477cc0    16255  package body SYS.KUPW$WORKER
    0x347477cc0    15446  package body SYS.KUPW$WORKER
    0x347477cc0      3944  package body SYS.KUPW$WORKER
    0x347477cc0      4705  package body SYS.KUPW$WORKER
    0x347477cc0      8916  package body SYS.KUPW$WORKER
    0x347477cc0      1651  package body SYS.KUPW$WORKER
    0x347aa28d0        2  anonymous block
    
    Job "SYSTEM"."SYS_SQL_FILE_SCHEMA_02" stopped due to fatal error at 09:41:12
    

    RESOLUTION:

    expdp system/******** parfile=par2.txt

    In the parfile I added:

    EXCLUDE=xmlschema

    After the export completed, I ran IMPDP and it completed successfully in 25 minutes.
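
    For reference, a sketch of what par2.txt might have contained (reusing names from the import parfile above; the exact contents were not posted):

    directory=exp_dmp
    dumpfile=ora1_exp_johnc.dmp
    logfile=ora1_exp_johnc.log
    schemas=JOHNC
    content=metadata_only
    exclude=xmlschema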

  • Full expdp and impdp: one db to another

    Hello! Good day!

    I would like to ask for any help with my problem.

    I would like to create a full database export and import it into a different database. These 2 databases are on separate machines.
    I tried to use the expdp and impdp tools for this task. However, I ran into some problems during the import task.

    Here are the details of my problems:

    When I try to impdp the dump file, it seems that I am not able to import the user's data and metadata.

    Here are the exact commands that I used for the export and import tasks:

    Export (Server #1)

    expdp user01/******** directory=ora3_dir full=y dumpfile=db_full%U.dmp filesize=2G parallel=4 logfile=db_full.log

    Import (Server #2)
    impdp user01/******** directory=ora3_dir dumpfile=db_full%U.dmp full=y logfile=db_full.log sqlfile=db_full.sql estimate=blocks parallel=4

    Here is the log that was generated during the impdp run:

    ;;;
    Import: Release 10.2.0.1.0 - 64 bit Production on Friday, 27 November 2009 17:41:07

    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    ;;;
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64 bit Production
    With the Partitioning, OLAP and Data Mining options
    Master table "PGDS"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
    Starting "PGDS"."SYS_SQL_FILE_FULL_01": PGDS/******** directory=ora3_dir dumpfile=ssmpdb_full%U.dmp full=y logfile=ssmpdb_full.log sqlfile=ssmpdb_full.sql
    Processing object type DATABASE_EXPORT/TABLESPACE
    Processing object type DATABASE_EXPORT/PROFILE
    Processing object type DATABASE_EXPORT/SYS_USER/USER
    Processing object type DATABASE_EXPORT/SCHEMA/USER
    Processing object type DATABASE_EXPORT/ROLE
    Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
    Processing object type DATABASE_EXPORT/RESOURCE_COST
    Processing object type DATABASE_EXPORT/SCHEMA/DB_LINK
    Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
    Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/SEQUENCE
    Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
    Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/CONTEXT
    Processing object type DATABASE_EXPORT/SCHEMA/PUBLIC_SYNONYM/SYNONYM
    Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
    Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
    Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
    Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
    Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/PRE_TABLE_ACTION
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
    Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/PACKAGE_SPEC
    Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/FUNCTION
    Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/PROCEDURE
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/ALTER_FUNCTION
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/VIEW
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/COMMENT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
    Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_BODY
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_TABLE_ACTION
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TRIGGER
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/TRIGGER
    Processing object type DATABASE_EXPORT/SCHEMA/JOB
    Processing object type DATABASE_EXPORT/SCHEMA/DIMENSION
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCACT_INSTANCE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCDEPOBJ
    Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
    Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
    Job "PGDS"."SYS_SQL_FILE_FULL_01" successfully completed at 17:43:09

    Thank you in advance.

    The good news is that your dumpfile seems fine. It has metadata and data.

    I looked through your impdp command and found your problem. You added the sqlfile parameter. This tells Data Pump to create a file that can be run from sqlplus; it does not actually create any objects. It also excludes data, because data could get pretty ugly in a sqlfile.

    Here's your impdp command:

    impdp user01/******** directory=ora3_dir dumpfile=db_full%U.dmp full=y logfile=db_full.log sqlfile=db_full.sql ...

    Just remove the

    sqlfile=db_full.sql

    After you ran your first job, you got a file named db_full.sql that has all the create statements in it. Once you remove the sqlfile parameter, your import will work.
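
    In other words, a corrected command along these lines (same names as yours) should import for real:

    impdp user01/******** directory=ora3_dir dumpfile=db_full%U.dmp full=y logfile=db_full.log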

    Dean

  • EXP/EXPDP questions

    Hello all;


    If we want to import a 5 GB .dmp file, how can we confirm the export dump file is valid?


    Suppose I am given a dump file to import and do not know from which schema it was taken; how would I find the fromuser or schemas information?


    Thank you.

    Hello, if you have a datapump file, you can use the sqlfile parameter to check the dump file without doing a full import:

    impdp system/manager directory=data_pump_dir dumpfile=5g.dmp sqlfile=5g.sql

    impdp writes all the DDL into 5g.sql.

    If you have a traditional export, you can use indexfile:

    imp system/manager file=imp5.dmp indexfile=indexf.sql

    In indexf.sql you also have the DDL of clusters, tables and indexes.

    That lets you check the dumpfile without a real imp or impdp.

    How can you know whether you have a traditional export or a datapump one? Well, if you do a cat of the dumpfile, you see very different results for datapump and export.

    in an export file, you will see something like:

    head -1 imp5.dmp

    EXPORT:V09.02.00

    in a datapump file you cannot read anything like that.

    Still, in a datapump file you can identify something like:

    "SYS"."SYS_EXPORT_SCHEMA_01"... which is the name of the master table.

    Regards

  • DataPump table import, simple question

    Hello
    As a junior DBA, I'm a bit confused.
    Suppose that a user wants to import, with datapump, a table which he exported from another database with different schemas and tablespaces (he exported it with expdp using tables=XX and I don't know the details...)...
    Besides the name of the table, should I ask him these three things: 1) schema 2) tablespace 3) oracle version?
    From the documentation I know about impdp's remapping capabilities... But are they required when importing a table?
    Thanks in advance

    Hello

    Suppose that a user wants to import, with datapump, a table which he exported from another database with different schemas and tablespaces (he exported it with expdp using tables=XX and I don't know the details...)...
    Besides the name of the table, should I ask him these three things: 1) schema 2) tablespace 3) oracle version?

    You can get this information from the dumpfile if you want - just to make sure you get the right information. Run your import command, but add:

    sqlfile=my_sql.sql

    You can then edit this sql file to see what is in the dumpfile. It won't show the data, but it will show all the metadata (tables, tablespaces, etc.). What you get with the sqlfile parameter is a .sql file that contains all the create statements that would have been executed.

    From the documentation I know about impdp's remapping capabilities... But are they required when importing a table?
    Thanks in advance

    You don't have to remap anything, but if the dumpfile contains a table scott.emp then it will be imported as scott.emp. If you want it to go into blake, then you must use remap_schema. If it goes into tablespace tbs1 and you want it in tbs2, you need a remap_tablespace.

    Suppose an end user wants me to export a specific table using datapump...
    Should he also give me the name of the tablespace where the exported table lives?

    It would be nice, but as above, you can get the tablespace name from the sqlfile during the import.
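
    For example, a quick scan of the generated file for tablespace references might look like this (all names are placeholders):

    impdp user/pwd directory=dp_dir dumpfile=exp.dmp sqlfile=my_sql.sql
    grep -i tablespace my_sql.sql | sort -u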

    I hope this helps.

    Dean

  • Enterprise export, Express (XE) import

    I'm trying to move a lab database from Enterprise to Express.
    I'm using datapump (expdp/impdp) at both ends of the migration.

    However, on import (impdp), I get a lot of errors.

    ORA-39083: Object type TABLESPACE failed to create with error:
    ORA-01276: Cannot add file [path]. File has an Oracle Managed Files file name.

    I'm using a very simplistic form of the commands. Too simple?

    [enterprise]$ expdp \"/ as sysdba\" dumpfile=filename.dmp full=y
    [express]$ impdp \"/ as sysdba\" dumpfile=filename.dmp

    I've looked around and have yet to find a good article on how to migrate smoothly from Enterprise to Express. Maybe I wasn't using the right keywords and missed what I'm looking for.

    Thank you!

    If you are moving from one machine to another, you probably need a remap_datafile on the impdp command. Let's say your source data files are located in:

    /source_dir1/enterprise/.../dbs/*.dbf - or whatever your data files are called

    and now you are trying to import, but the new system has a different directory tree:

    /target_dir1/Express/.../dbs/*.dbf

    When the import is performed, the create tablespace statement will have the original path in it. If you want to change where the data files go, then you need a remap_datafile parameter on the impdp command. If you do this, you must give the complete path, not just the data file name.

    REMAP_DATAFILE=/source_dir1/enterprise/.../DBS/tbs1.dbf:/target_dir1/Express/.../DBS/tbs1.dbf

    If you are changing operating systems, you must put in the correct string:

    REMAP_DATAFILE=unix_path/filename:dos_path\filename

    You can check this by running this command:

    impdp \"/ as sysdba\" dumpfile=filename.dmp include=tablespace sqlfile=tablespace.sql

    then edit tablespace.sql to look at the create tablespace statements.
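
    Putting it together, a sketch of the import with the remap (the paths here are placeholders; the real ones come from tablespace.sql):

    impdp \"/ as sysdba\" dumpfile=filename.dmp full=y remap_datafile='/source_dir1/enterprise/dbs/tbs1.dbf':'/target_dir1/express/dbs/tbs1.dbf'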

    Dean

  • Don't know the source tablespace name to use in REMAP_TABLESPACE

    11.2.0.3/RHEL 5.4

    I was asked to restore a table from an expdp dumpfile using impdp. The source DB no longer exists, so I don't know the source tablespace names to use in REMAP_TABLESPACE. The table in the dumpfile is partitioned, using at least 5 tablespaces for data and indexes. In the target, all the data and indexes will use one tablespace. Is there a way I can restore this table without knowing the source tablespace names?

    Hello
    You can use

    impdp sqlfile=xxx.sql

    It just generates a text file with all the create commands and does not actually create anything in the database - you can then grep it for all the tablespace names.
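
    A sketch of both steps (directory, credentials and file names are placeholders):

    impdp system/******** directory=dp_dir dumpfile=exp.dmp sqlfile=xxx.sql
    grep -io 'tablespace "[^"]*"' xxx.sql | sort -u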

    If the export was a full one,
    you can probably even say

    impdp sqlfile=xxx.sql include=tablespace

    (it might be tablespaces rather than tablespace - I can't remember the syntax)

    Cheers,
    Harry

  • Schema DDL

    How to extract only a schema's DDL for... packages/procedures/functions/triggers

    DBMS_METADATA.GET_DDL
    http://www.optimaldba.com/scripts/extract_schema_ddl.SQL

    Or use
    expdp content=metadata_only
    then impdp SQLFILE=file.sql - which will give you the DDL.
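
    For the DBMS_METADATA route, a sketch that pulls just the PL/SQL object types for one schema (the schema name is a placeholder):

    set long 100000 pagesize 0
    select dbms_metadata.get_ddl(object_type, object_name, owner)
    from dba_objects
    where owner = 'SCOTT'
    and object_type in ('PACKAGE','PROCEDURE','FUNCTION','TRIGGER');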

    -André

  • How to not export DBMS_FGA audit policies

    Hello

    I have a few tables with DBMS_FGA policies on them (because of Audit Vault) which call a user-defined audit function. When exporting the whole schema, the policies get exported too, but not the audit function, since it was created in another schema. The problem is that when I import the schema into a different database, I get the following error when trying to access the data in these tables:
    ORA-28112: failed to execute policy function 
    ORA-06512: at line 4 
    28112. 00000 -  "failed to execute policy function" 
    *Cause:    The policy function has one or more error during execution. 
    *Action:   Check the trace file and correct the errors. 
    This is because the audit function does not exist in that database. The problem is resolved if I drop the policies after the import, for example with:
    begin
      for rec in (select * from dba_audit_policies where object_schema = 'ACC') loop
      dbms_fga.drop_policy(object_schema => rec.object_schema,
                             object_name => rec.object_name,
                             policy_name => rec.policy_name);  
      end loop;
    end;
    /  
    Does anyone know if it is possible to somehow not export these policies, so that running the preceding PL/SQL after the import would be unnecessary?

    Thanks in advance.

    Kind regards
    Swear

    Hello

    All you need to do is add the exclusion filter

    exclude=fga_policy

    to your export or import command line. If you don't want the policies in the dumpfile, add it to the expdp command. If you want them in the dumpfile but do not want to import them, add it to the impdp command. If you add it to both, you will get an error during the import saying there are no fga_policy objects in the dumpfile.

    In fact, it may not be fga_policy, so check for it. I don't have a database I can look at right now, but if you look in your export log file, you should be able to see the object type that you want to exclude. You are looking for something like:

    SCHEMA_EXPORT/.../POLICY

    I guess this would be it. To check, you can use

    impdp sqlfile=my_file.sql directory=... dumpfile=...

    This will generate a file called my_file.sql in which you can look for the DDL or PL/SQL you want to eliminate. Once you find it, look upward until you see a line like SCHEMA_EXPORT/... Then add the end of that line to your export command with an exclude=.
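
    For instance, if the object path does turn out to end in FGA_POLICY, the export might look like this (a sketch; the schema, directory and file names are assumptions):

    expdp acc_owner/******** schemas=ACC directory=dp_dir dumpfile=acc.dmp exclude=fga_policy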

    I hope this helps.

    Dean

    Published by: Dean Gagne on January 15, 2010 13:04

  • Using Data Pump to get a copy of all the CREATE TABLESPACE DDL in a database

    I looked through the documentation for a way to export this.

    I thought I could get the DDL using the parfile given below and then an IMPDP SQLFILE=yyy.sql

    $ cat expddl.sql

    DIRECTORY=dpdata1
    JOB_NAME=EXP_JOB
    CONTENT=METADATA_ONLY
    FULL=y
    DUMPFILE=export_${ORACLE_SID}.dmp
    LOGFILE=export_${ORACLE_SID}.log


    I would have thought you could just do an INCLUDE on the tablespace, but it doesn't like that.


    10.2.0.3

    Hello..

    I have not tried to get the tablespace create DDL with expdp; for that I use DBMS_METADATA. This is the query:

    set pagesize 0
    set long 90000
    select DBMS_METADATA.GET_DDL ('TABLESPACE',tablespace_name) from dba_tablespaces;
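
    Alternatively, if you already have the full metadata dump, I believe a sqlfile pass on the import side will accept the INCLUDE there (a sketch; file names are placeholders):

    impdp system/******** directory=dpdata1 dumpfile=export_orcl.dmp full=y include=tablespace sqlfile=yyy.sql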
    

    HTH
    Anand

  • IMPDP commit=y parameter?

    Hi all

    COMMIT=y in the old imp helped me a lot when I had little space allocated to the undo tablespace.

    I can't believe that it is deprecated in EXPDP/IMPDP - is this true?

    I'll have a dump of about 100 GB in size and only about 120 GB of disk space. How can I avoid 'snapshot too old' errors?


    Thank you very much

    Kinz

    You started with your commit=y question, then you asked "how much space will be allocated to it when it is imported? Will it be multiplied by 2? or 3?", and now you ask about insert and rollback. Please do not mix up issues; stay on one, so that we can try our best to give you the right answer.

    If you are asking how to estimate the size of the dump file, then use estimate_only=y with the expdp command. This gives you the estimated size of the export dump. The real dumpfile size will be lower than what the estimate_only parameter reports. If you are asking how long it will take, I think that is another issue for a new thread - and no, I do not know its answer.
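
    A sketch of such an estimate-only run (credentials and schema are placeholders):

    expdp system/******** schemas=SCOTT estimate_only=y estimate=blocks logfile=est.log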

    But your question is:

    How much space will be allocated to it when it is imported? Will it be multiplied by 2? or 3?

    Then read the thread below:
    Impdp: estimate schema size

    For the next issue, please open a new thread. Close this one if you have your answer; otherwise continue with this question only.

    Regards
    Girish Sharma

  • expdp/impdp metadata in clear text

    11.2.0.1. I did a quick search on the forum and on Google and can't find what I'm looking for.

    Ran the following:
    expdp full=y content=metadata_only dumpfile=exp_full_metadata.dmp logfile=exp_full_metadata.log exclude=table exclude=index 
    Now I want to get the metadata into some kind of plain-text file, so I can look at it and copy sections. With IMP we could simply display the DDL on the screen and not import it - is there a similar way to do this with IMPDP?

    for example, what we had with imp: the import parameter SHOW: http://docs.oracle.com/cd/B10500_01/server.920/a96652/ch02.htm#1014036
    SHOW
    Default: n
    When SHOW=y, the contents of the export file are listed on the screen and not imported. The SQL statements contained in the export are displayed in the order in which import would execute them.
    The SHOW parameter can be used only with FULL=y.

    Hello

    SQLFILE=file.sql
    

    is what you need
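
    For example, against the metadata dump from your expdp (directory and credentials assumed):

    impdp system/******** directory=dp_dir dumpfile=exp_full_metadata.dmp full=y sqlfile=file.sql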

    With DataPump it is also likely that the transformations you want to make can be done from the command line rather than by manipulating a text file - what do you actually want to change?

    Cheers,
    Harry

    http://dbaharrison.blogspot.com

  • How to determine which tablespaces must be created before running IMPDP?

    Hi all

    I just discovered that all relevant tablespaces must be created before you can run IMPDP.
    I'm curious to know if it is possible to figure out the tablespace names from a dump file.

    Thank you very much.

    I just discovered that all relevant tablespaces must be created before you can run IMPDP.
    I'm curious to know if it is possible to figure out the tablespace names from a dump file.

    There is no need to have the same tablespaces; you can use the REMAP_TABLESPACE parameter.

    If the source tablespace name is 'USERS' and your destination tablespace name is 'USERS_DEST', then give:

    impdp system/******** dumpfile=file.dmp logfile=file.log directory=data_pump_dir remap_tablespace=USERS:USERS_DEST

    Or, if you want to know which tablespaces are used, there is an option called SQLFILE; if you import with SQLFILE, you can see all of the DDL from your dumpfile:

    impdp system/******** directory=data_pump_dir dumpfile=file.dmp logfile=file.log sqlfile=ddl_dump.txt

  • Impdp index

    Hello guys,
    I'll expdp the whole 10gR2 database and impdp it into an 11gR2 database.
    UserA's indexes in the source database are created in tablespace indexA,
    and I would like to impdp this user into the target 11g database so that when its indexes are created, they are created not in tablespace indexA but in
    tablespace indexB.
    Is there some remapping command for that?

    Dean Gagne says:
    Hello

    The 2 responses that stated:

    REMAP_TABLESPACE=USERA:indexA, USERB:indexB

    and

    impdp UserA/password REMAP_TABLESPACE='ADTB':'tblB'

    are incorrect. There are no such parameters in Data Pump. remap_tablespace takes only the old tablespace name and a new tablespace name. It cannot be further qualified by an object type, object schema or object name.

    The only thing you can do is:

    REMAP_TABLESPACE = old_tablespace:new_tablespace

    If you really need indexes to be remapped to a different tablespace, then the only thing you can do with Data Pump is to use the sql file and then edit that file. Here's what you can do:

    impdp user/password exclude=index - this will import everything except the indexes.

    impdp user/password include=index sqlfile=index.sql - this will create a file named index.sql that will contain all the create index statements.

    Then you can use your favorite editor and change the tablespace names until you are happy; you should also search for '-- CONNECT' and add the passwords that are missing. Then just run the index.sql file from sqlplus, as sketched below.
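
    As a sketch, the whole sequence might look like this (credentials, directory and tablespace names are placeholders):

    impdp usera/******** directory=dp_dir dumpfile=exp.dmp exclude=index
    impdp usera/******** directory=dp_dir dumpfile=exp.dmp include=index sqlfile=index.sql
    sed 's/TABLESPACE "INDEXA"/TABLESPACE "INDEXB"/g' index.sql > index_new.sql
    sqlplus usera/******** @index_new.sql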

    Thank you

    Dean

    Dean,

    Please read the Oracle document:

    REMAP_TABLESPACE

    Default: none

    Purpose
    Remaps all objects selected for import with persistent data in the source tablespace to be created in the target tablespace.

    Syntax and Description
    REMAP_TABLESPACE=source_tablespace:target_tablespace

    Multiple REMAP_TABLESPACE parameters can be specified, but no two may have the same source tablespace. The target schema must have sufficient quota in the target tablespace.

    Note that use of the REMAP_TABLESPACE parameter is the only way to remap a tablespace in Data Pump Import. It is a simpler and cleaner method than the one provided in the original Import utility. In original Import, if you wanted to change the default tablespace for a user, you had to perform several steps, including exporting and dropping the user, creating the user in the new tablespace, and importing the user from the dump file. That method was subject to many restrictions (including the number of tablespace subclauses), sometimes resulting in the failure of certain DDL commands.

    However, the Data Pump Import method of using the REMAP_TABLESPACE parameter works for all objects, including the user, and it works regardless of how many tablespace subclauses are in the DDL statement.

    --------

    You say that my "impdp UserA/password REMAP_TABLESPACE='ADTB':'tblB'" is wrong, then you say "there are no such parameters in Data Pump!" and "The only thing you can do is: remap_tablespace=old_tablespace:new_tablespace" - what is the difference? Is there a syntax error or something?

    Please provide details

    Thank you

    Grosbois
