Does Data Pump really replace exp/imp?

Hi guys,

I've read some people saying that we should use Data Pump instead of exp/imp. But as far as I can see, if I have a database behind a firewall somewhere else and can't connect to that database directly, and I need to get some data across, then Data Pump is useless to me and I can just exp and imp the data instead.

OracleGuy777 wrote:
this problem cannot be solved by Data Pump. Yet people are saying exp and imp will become obsolete and no longer supported. Will Oracle do anything for people who have the same problem as me?


The sky is not falling. It will not fall for a few years yet. Deprecated != unsupported. Deprecated = not recommended.

The time will come when exp/imp is no longer supported, but I don't expect that to happen for a few more versions of the database, particularly because Data Pump does not yet cover all of the problem areas.

Tags: Database

Similar Questions

  • Data Pump options compared to exp/imp

    I am using Data Pump for the first time, and I exported a single table to reload it into another instance where it had been accidentally truncated. I don't see the options in Data Pump that I'm used to with exp/imp, for example to ignore create errors if the table structure already exists (ignore=y) or to skip loading the indexes (indexes=n). The Oracle documentation I have read so far doesn't even discuss these issues or how Data Pump handles them. Please let me know if you have experience with this.
    I ran Data Pump to export the table with data only, and again with metadata. I couldn't read the metadata: I expected somewhat readable CREATE statements, but it is binary and XML rather than plain statements, and it's not yet clear how I could use it. Now I am trying to take my data-only export and load it into the second instance. The table already exists but is truncated, and I don't know how it will handle the indexes. Please bring me up to date if you can.

    Thank you
    Tony

    Hello

    You should read the Oracle documentation; it has very good examples.

    http://download.Oracle.com/docs/CD/B19306_01/server.102/b14215/dp_overview.htm#sthref13

    Mapping of Data Pump Export parameters to the original Export utility:

    http://download.Oracle.com/docs/CD/B19306_01/server.102/b14215/dp_export.htm#sthref181
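
    For the specific options asked about, a rough sketch of the Data Pump equivalents (TABLE_EXISTS_ACTION, EXCLUDE and CONTENT are real impdp parameters; the directory object, dump file and table names below are only placeholders):

    impdp scott/tiger DIRECTORY=dp_dir DUMPFILE=tab.dmp TABLES=my_table TABLE_EXISTS_ACTION=APPEND EXCLUDE=INDEX

    or, to load only the rows into the already-existing (truncated) table:

    impdp scott/tiger DIRECTORY=dp_dir DUMPFILE=tab.dmp TABLES=my_table CONTENT=DATA_ONLY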

    Regards

  • exp/imp without data creates a huge empty DB!

    Hello
    I have an 8.1.7.4 database of about 400 GB on Windows 2000.
    I wanted to exp/imp it, but I don't want the data, just the structure and code.

    So I did

    exp x/y@db FILE=D:\exp_database.dmp ROWS=N COMPRESS=Y FULL=Y CONSTRAINTS=Y GRANTS=Y INDEXES=Y

    imp x/y@db2 FILE=D:\exp_database.dmp ROWS=N FULL=Y CONSTRAINTS=Y GRANTS=Y INDEXES=Y FEEDBACK=1000


    The 400 GB database exported to a 7 MB file, which is good.
    But when I import that 7 MB file, it creates an EMPTY 160 GB database!
    I see that 'some' of the data files are full size, even though they are empty!
    Is there any reason for this? Does the structure really take 40% of the db?




    Thank you

    The 'COMPRESS' export parameter determines how the CREATE TABLE statements are generated.

    By default, COMPRESS is 'Y', that is, a CREATE TABLE statement is generated with an INITIAL extent as large as the allocated size of the table (all the extents of the table combined), regardless of whether any rows exist in the table, are exported, or are imported.

    You could export with 'COMPRESS=N', which generates CREATE TABLE statements with an INITIAL as large as the INITIAL in the exported table's definition. However, if the source table has an INITIAL definition of, say, 100M, then again, regardless of the number of rows present/exported/imported, the import will run CREATE TABLE with an INITIAL of 100M. In such cases you need to generate the CREATE TABLE statements yourself and create the tables manually.
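
    As an illustration of that last point (paths and file names here are only placeholders, not part of the original question), one way is to export without COMPRESS and use the legacy imp INDEXFILE option, which writes the CREATE INDEX statements, plus the CREATE TABLE statements as comments, to a file instead of importing, so the storage clauses can be reviewed and edited before you run them manually:

    exp x/y@db FILE=D:\exp_database.dmp ROWS=N COMPRESS=N FULL=Y
    imp x/y@db2 FILE=D:\exp_database.dmp FULL=Y ROWS=N INDEXFILE=D:\create_ddl.sql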

    Hemant K Collette
    http://hemantoracledba.blogspot.com

    Published by: Hemant K Collette on July 29, 2010 14:00

  • Differences between Data Pump and the legacy import and export

    Hi all

    I work as a junior DBA in my organization, and I have seen that my organization still uses the legacy import and export utilities instead of Oracle Data Pump.

    I want to convince my managers to replace the existing process with Oracle Data Pump. I have a meeting with them to present my points and convince them to adopt the Data Pump utilities.

    My powers of persuasion are rather weak, and I don't want to miss anything or have to work it all out myself, so I really need a list of the strong differences. It would be really appreciated if someone could put forward strong points for Oracle Data Pump over the legacy import and export.

    Thank you

    Cabbage

    Hello

    As other people have already said, the main advantage of Data Pump is performance: it is not just a little faster than exp/imp, it is massively faster (especially when combined with PARALLEL).

    It is also far more flexible: it will even create users from schema-level exports, which imp cannot do for you (and it was always very annoying that it couldn't).

    It is restartable.

    It has a PL/SQL API (DBMS_DATAPUMP).

    It supports all object types and new features (exp does not, and that alone is probably reason enough to switch).

    There is even a 'legacy' mode in 11.2 where most of your old exp parameter files will still work with it; just change exp to expdp and imp to impdp.

    The main obstacles to the transition to Data Pump seem to be objections like "what do you mean I have to create a directory object for it to work?" and "where is my dumpfile, why can't it be on my local machine?". These are minor things that are well worth getting past.

    I suggest you do some sort of demo with real data from one of your large databases: do a full exp and a full expdp with PARALLEL and show them the runtimes so they can compare...
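
    A minimal sketch of such a demo (the directory object, file names and degree of parallelism are placeholders, not Rich's actual commands):

    exp system/manager FULL=Y FILE=/tmp/full_legacy.dmp LOG=/tmp/full_legacy.log
    expdp system/manager FULL=Y DIRECTORY=dp_dir DUMPFILE=full_dp_%U.dmp PARALLEL=4 LOGFILE=full_dp.log

    Comparing the elapsed times reported at the end of the two log files usually makes the case by itself.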

    Cheers,

    Rich

  • Upgrade of database using Data Pump

    Hello

    I'm moving my database from a Windows 2003 server to a Windows 2007 server. At the same time I am upgrading this database from 10g to 11gR2 (11.2.0.3),
    so I am using the export/import method of upgrade (via Data Pump, not the old exp/imp).

    I have successfully exported the source database and created the empty shell database ready to take the import. However, I have a couple of questions.

    Q1. Regarding all the objects in the source database's SYSTEM tablespace: how will they import, given that the new target database already has a SYSTEM tablespace?
    I guess I need to use the TABLE_EXISTS_ACTION option for the import. However, should I set this to APPEND, SKIP, REPLACE or TRUNCATE? Which is best?

    Q2. I intend to slightly modify the directory structure on the new database server. Would it be preferable to pre-create the tablespaces, or to leave that to the import but use the REMAP_DATAFILE option? What is everyone's experience of the best way to go? Again, if I pre-create the tablespaces, how do I tell the import to skip creating them?

    Q3. These 2 databases are on the same network, so in theory, instead of a manual export, copying the dump file to the new server and then importing, I could use a network link for the import. I was wondering what the downsides of this method are compared with using an explicit export dump file?

    Thank you
    Jim

    Jim,

    Q1. Regarding all the objects in the source database's SYSTEM tablespace: how will they import, given that the new target database already has a SYSTEM tablespace?
    I guess I need to use the TABLE_EXISTS_ACTION option for the import. However, should I set this to APPEND, SKIP, REPLACE or TRUNCATE? Which is best?

    If all you have is the basic database and nothing created in it, then you can do FULL=Y; in fact, that is probably what you want. The SYSTEM tablespace will already be there, so when Data Pump attempts to create it, just that CREATE statement will fail; nothing else will fail because of it. In most cases your system tables will already be there too, and that's OK as well. If you do a schema-mode import instead, you will miss out on some of the other things a full import brings across.

    Q2. I intend to slightly modify the directory structure on the new database server. Would it be better to pre-create the tablespaces, or to leave that to the import but use the REMAP_DATAFILE option? What is everyone's experience of the best way to go? Again, if I pre-create the tablespaces, how do I tell the import to skip creating them?

    If the directory structures are different (which they usually are), then the easier way is to pre-create them. You can run impdp with SQLFILE and INCLUDE=TABLESPACE. This will give you all of the CREATE TABLESPACE statements in a text file, and you can edit that file to change whatever you want. You can then tell Data Pump to skip creating the tablespaces by using EXCLUDE=TABLESPACE.
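
    A hedged sketch of that workflow (the directory object, dump file and SQL file names are placeholders, not Dean's actual commands):

    impdp system DIRECTORY=dp_dir DUMPFILE=full.dmp SQLFILE=tablespace_ddl.sql INCLUDE=TABLESPACE

    Edit tablespace_ddl.sql, pre-create the tablespaces from it, then run the real import without them:

    impdp system DIRECTORY=dp_dir DUMPFILE=full.dmp FULL=Y EXCLUDE=TABLESPACE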

    Q3. These 2 databases are on the same network, so in theory, instead of a manual export, copying the dump file to the new server and then importing, I could use a network link for the import. I was just wondering what the downsides of this method are compared with using an explicit export dump file?

    The only con might be if you have a slow network. That would make it slower, but if you would have to copy the dumpfile over the same network anyway, then you would see the same basic traffic. The benefit is that you don't need the additional disk space. Here's how I look at it.

    1. You need XX GB for the source database.
    2. You need YY GB for the source dumpfile.
    3. You need YY GB for the target dumpfile that you copy over.
    4. You need XX GB for the target database.

    Going over the network gets rid of YY * 2 GB for the dumpfiles.
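
    As a minimal sketch of the network-link variant (the link name, directory object and password are placeholders), the target database pulls the data directly from the source over a database link, so no dump file is written on either side:

    create database link old_db connect to system identified by mypassword using 'OLDDB';
    impdp system DIRECTORY=dp_dir LOGFILE=net_imp.log NETWORK_LINK=old_db FULL=Y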

    Dean

  • import schema data using oracle data pump

    Hi all

    I want to import a schema onto another server using impdp, but on this server the user does not exist. Is it possible to import all the objects, including creating the schema? With exp/imp this was possible, but with Data Pump I can't do the same; it gives an error that the user does not exist. For your reference, I give the import command below.

    impdp system DIRECTORY=DIR_DUMP LOGFILE=expdp_tsadmin_tcgadmin_11182013_2_2.log DUMPFILE=expdp_tsadmin_tcgadmin_11182013_2.dmp SCHEMAS=TS_ADMIN VERSION=10.1.0.2.0

    Pls help and thanks in advance.

    Thank you

    Piku

    Hello

    All the errors are due to the fact that the tablespace ts_data does not exist; everything else fails because of that.

    create tablespace ts_data datafile 'xxxxxx' size xM;

    then try again
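
    Alternatively (this is only an assumption about the situation, not part of the answer above), if creating the tablespace is not an option you could remap the objects into an existing tablespace during the import, e.g.:

    impdp system DIRECTORY=DIR_DUMP DUMPFILE=expdp_tsadmin_tcgadmin_11182013_2.dmp SCHEMAS=TS_ADMIN REMAP_TABLESPACE=ts_data:users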

    Cheers,

    Harry

  • Data Pump 11g Network Import

    I need to perform a Data Pump network-mode import from a 10.2.0.4 database on my old server (HP-UX 11.11) into my new 11.2.0.3 database on a new server (HP-UX 11.31). What I would REALLY like to do is import directly from my physical standby database (running in READ ONLY mode while I do the import) rather than having to quiesce my production database for a couple of hours while I do the import from there.

    What I want to know is whether a Data Pump network-mode import running on the new 11.2.0.3 server creates a Data Pump extract job in the old database as part of the direct network-link import. If so, I won't be able to use the physical standby as the source of my import, because Data Pump will not be able to create the master table in the old database. I can't find anything in the Oracle documentation about using a physical standby as a source. I know that I can't do a regular Data Pump export of that database, but I would really like to know if anyone has experience doing this.

    Any comments would be greatly appreciated.

    Bad news, Harry: it worked for me against a standby database open in READ ONLY mode. Not sure what is different between your environment and mine, but there must be something... The read-only database is a 10.2.0.4 database running on an HP PA-RISC box under HP-UX 11.11. The target database runs under 11.2.0.3, on an HP Itanium box under HP-UX 11.31. The user I connect to on the target database has IMP_FULL_DATABASE privs, and it is the user id used for the DB_LINK and also the same user id on the source database (which, of course, it follows!). That user has the required privs there as well. My parfile looks like this:

    TABLES = AC_LIAB_%
    NETWORK_LINK = ARCH_LINK
    DIRECTORY = DATA_PUMP_DIR
    JOB_NAME = AMIBASE_IMPDP_ARCHDB
    LOGFILE = DATA_PUMP_DIR:base_impdp_archdb.log
    REMAP_TABLESPACE = ARCHIVE_BEFORE2003:H_DATA
    REMAP_TABLESPACE = ARCHIVE_2003:H_DATA
    REMAP_TABLESPACE = ARCHIVE_2004:H_DATA
    REMAP_TABLESPACE = ARCHIVE_2005:H_DATA
    REMAP_TABLESPACE = ARCHIVE_2006:H_DATA
    REMAP_TABLESPACE = ARCHIVE_2007:H_DATA
    REMAP_TABLESPACE = ARCHIVE_2008:H_DATA
    REMAP_TABLESPACE = ARCHIVE_INDEXES:H_INDEXES
    REUSE_DATAFILES = NO
    SKIP_UNUSABLE_INDEXES = Y
    TABLE_EXISTS_ACTION = REPLACE
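
    For reference, a parfile like this would typically be invoked along the following lines (the user and parfile name are placeholders):

    impdp myuser/mypassword PARFILE=arch_import.par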

  • exp/imp support question

    Hello all.
    It is said in some other forums that the exp and imp tools will be dropped from the product. Does anyone know if this is true and when it will happen?
    Thank you very much.

    It has not happened yet; you can still use exp/imp in 11gR2.

    Since Oracle 10g there is a better tool called "Data Pump" for exports and imports.

  • Oracle Data Pump

    In Oracle 11g, Oracle introduced the new feature called Oracle Data Pump. What is the difference compared with the imp/exp utilities? Can someone explain how to use this new feature,

    for example, using the user scott to export some tables in my schema?

    Data Pump is very fast, and parallel import is available. exp/imp sometimes fails due to space problems on the server, but with Data Pump's network import method rows are imported as they are exported from the source, so no dump file is needed on disk, which saves space. With Data Pump Import, a single stream of data load is about 15 to 45 times faster than original Import.
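
    As a minimal sketch of the scott example asked about (the directory path, dump file and table names are placeholders; the directory object has to be created and granted by a privileged user first):

    create directory dpump_dir as '/u01/app/oracle/dpump';
    grant read, write on directory dpump_dir to scott;

    expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=scott_tabs.dmp TABLES=emp,dept LOGFILE=scott_exp.log
    impdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=scott_tabs.dmp LOGFILE=scott_imp.log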

  • Schema-wise exp/imp or full database exp/imp: which is better for cross-platform 10g database migration?

    Hello

    When performing a cross-platform database migration (big-endian to little-endian) on 10g, which should be preferred: a full database export/import, or doing it schema by schema using exp/imp?

    The database is about 3 TB and is an Oracle EBS server with many custom schemas in it.

    Your suggestions are welcome.

    For EBS, export/import of individual schemas is not supported because of the dependencies between the different EBS schemas; only a full export/import is supported.

    What are the exact versions of EBS and database?

  • Migration from 10g to 12c using Data Pump

    Hi, while I have used Data Pump at the schema level before, I'm relatively new to full database imports.

    We are attempting a full database migration from 10.2.0.4 to 12c using the Data Pump full-database method over a database link.

    The DBA has indicated that we should avoid moving SYSAUX and SYSTEM objects, but my initial reading of the documentation suggested that these objects are not exported anyway, since the default is TRANSPORTABLE=NEVER. Can anyone confirm this? However, the import/export log refers to objects I thought would not be included:

    ...

    23-FEB-15 19:41:11.684: Estimated 3718 TABLE_DATA objects in 77 seconds
    23-FEB-15 19:41:12.450: Total estimation using BLOCKS method: 52.93 GB
    23-FEB-15 19:41:14.058: Processing object type DATABASE_EXPORT/TABLESPACE
    23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"UNDOTBS1" already exists
    23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"SYSAUX" already exists
    23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"TEMP" already exists
    23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"USERS" already exists
    23-FEB-15 20:10:33.200: Completed 96 TABLESPACE objects in 1759 seconds
    23-FEB-15 20:10:33.208: Processing object type DATABASE_EXPORT/PROFILE
    23-FEB-15 20:10:33.445: Completed 7 PROFILE objects in 1 seconds
    23-FEB-15 20:10:33.453: Processing object type DATABASE_EXPORT/SYS_USER/USER
    23-FEB-15 20:10:33.842: Completed 1 USER objects in 0 seconds
    23-FEB-15 20:10:33.852: Processing object type DATABASE_EXPORT/SCHEMA/USER
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"OUTLN" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"ANONYMOUS" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"OLAPSYS" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"MDDATA" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"SCOTT" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"LLTEST" already exists
    23-FEB-15 20:10:52.372: Completed 1140 USER objects in 19 seconds
    23-FEB-15 20:10:52.375: Processing object type DATABASE_EXPORT/ROLE
    23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"SELECT_CATALOG_ROLE" already exists
    23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"EXECUTE_CATALOG_ROLE" already exists
    23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"DELETE_CATALOG_ROLE" already exists
    23-FEB-15 20:10:55.256: ORA-31684: Object type ROLE:"RECOVERY_CATALOG_OWNER" already exists
    ...

    Any insight is appreciated.

    The SYS, CTXSYS, MDSYS and ORDSYS schemas are not exported by exp/expdp.

    Doc ID: Note 228482.1

    I guess you have already installed the 12c software and, it seems, created a database, so when you imported you got these "already exists" messages.

    Whenever the software is installed and a database is created, SYSTEM, SYS and SYSAUX are created by default.

  • ODI LKM Oracle for Oracle Data Pump question

    Hi all

    I have a weird problem in ODI.

    I am joining per_all_people_f and fnd_user to load w_user_ds using Oracle Data Integrator. The LKM used is LKM Oracle for Oracle Data Pump.

    When I run the interface, I get the error below:

    ODI-1227: Task USER_DATA_SET (Loading) fails on the source connection ORACLE EBS.

    Caused by: java.sql.SQLSyntaxErrorException: ORA-00923: FROM keyword not found where expected

    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:462)

    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:405)

    at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:931)

    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:481)

    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:205)

    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:548)

    at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:217)

    at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1115)

    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1488)

    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3769)

    at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3954)

    at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1539)

    at oracle.odi.runtime.agent.execution.sql.SQLCommand.execute(SQLCommand.java:163)

    at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:102)

    at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:1)

    at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:50)

    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2913)

    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2625)

    at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:577)

    at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:468)

    at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:2128)

    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:366)

    at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:216)

    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:300)

    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:292)

    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:855)

    at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:126)

    at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)

    at java.lang.Thread.run(Thread.java:662)

    The generated code is:

    create table X780021
    (
        C1_FIRST_NAME,
        C2_MID_NAME,
        C3_LAST_NAME,
        C4_FULL_NAME,
        C5_NAME_SUFFIX,
        C6_SEX_MF_CODE,
        C7_SEX_MF_NAME,
        C8_COUNTRY_NAME,
        C9_LOGIN,
        C10_CREATED_BY_ID,
        C11_CHANGED_BY_ID,
        C12_CREATED_ON_DT,
        C13_CHANGED_ON_DT,
        C14_AUX1_CHANGED_ON_DT,
        C15_SRC_EFF_TO_DT,
        C16_INTEGRATION_ID,
        C17_EFFECTIVE_START_DATE
    )
    ORGANIZATION EXTERNAL
    (
        TYPE oracle_datapump
        DEFAULT DIRECTORY dat_dir
        LOCATION ('X780021.exp')
    )
    PARALLEL
    AS SELECT
        ALL_PEOPLE_F.FIRST_NAME,
        ALL_PEOPLE_F.MIDDLE_NAMES,
        ALL_PEOPLE_F.LAST_NAME,
        ALL_PEOPLE_F.FULL_NAME,
        ALL_PEOPLE_F.SUFFIX,
        ALL_PEOPLE_F.SEX,
        ALL_PEOPLE_F.SEX,
        ALL_PEOPLE_F.NATIONALITY,
        USER.USER_NAME,
        ALL_PEOPLE_F.CREATED_BY,
        ALL_PEOPLE_F.LAST_UPDATED_BY,
        ALL_PEOPLE_F.CREATION_DATE,
        ALL_PEOPLE_F.LAST_UPDATE_DATE,
        ALL_PEOPLE_F.CREATION_DATE,
        ALL_PEOPLE_F.EFFECTIVE_END_DATE,
        USER.USER_ID,
        ALL_PEOPLE_F.EFFECTIVE_START_DATE
    from APPS.FND_USER USER, APPS.PER_ALL_PEOPLE_F ALL_PEOPLE_F
    where (1 = 1)
    and (ALL_PEOPLE_F.PERSON_ID = USER.EMPLOYEE_ID)

    I don't see what the problem is here.

    Can someone help me?

    Thank you and best regards,

    Krishna Prasad

    I found the problem. It is with the way ODI generates the alias for the FND_USER table: by default it produces USER as the alias, which is an Oracle keyword. We just needed to rename it to something else, and it worked.
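
    For illustration, here is the problematic fragment of the generated SELECT and a corrected version; FND_USR is just a hypothetical replacement alias (in ODI this is typically done by changing the alias of the source datastore in the interface, and the USER.USER_NAME and USER.USER_ID references in the select list change accordingly):

    -- fails: USER is an Oracle keyword
    from APPS.FND_USER USER, APPS.PER_ALL_PEOPLE_F ALL_PEOPLE_F

    -- works: alias renamed
    from APPS.FND_USER FND_USR, APPS.PER_ALL_PEOPLE_F ALL_PEOPLE_F
    where (1 = 1)
    and (ALL_PEOPLE_F.PERSON_ID = FND_USR.EMPLOYEE_ID)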

  • a question about data pump

    Hello

    I'm running a full database export using Data Pump (FULL=Y).


    To ensure data integrity, I locked certain schemas before starting the export.
    I want to make sure that these locked schemas will still be exported (not skipped), right?

    Please help confirm.

    Thank you very much.

    db version: 10.2.0.3 on Linux 5

    Published by: 995137 on April 23, 2013 15:30

    Hello
    Whether a schema is locked or unlocked makes no difference to Data Pump; it extracts them anyway in a full export. The log file should list all the tables that are exported, so you should see them there.
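
    For what it's worth, a small sketch of that scenario (the schema name, directory object and file names are placeholders): locking an account only prevents logins, so its objects and data are still picked up by the full export.

    alter user app_owner account lock;

    expdp system DIRECTORY=dp_dir DUMPFILE=full.dmp FULL=Y LOGFILE=full_exp.log

    alter user app_owner account unlock;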

    Kind regards
    Harry

  • ORA-39139: Data Pump

    While trying to export certain schemas of mine, it ended up showing...
    'completed with 1 error(s)'

    Looking at the export log file, I found:
    ORA-39139: Data Pump does not support XMLSchema objects. TABLE_DATA:"BETUSR"."NSC_SERVICES" will be skipped.

    What does this mean?

    Before 11.x, expdp does not support XMLSchemas or columns based on an XMLSchema.
    To work around this, you can use the "old" exp utility.
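
    A minimal sketch of that workaround for the affected table (the connecting user and file names are placeholders; the owner and table come from the error message above):

    exp system/manager TABLES=BETUSR.NSC_SERVICES FILE=nsc_services.dmp LOG=nsc_services.log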

    Take a look at this note:
    How to move XML schema based XMLType data from 10g / 11g to another schema [ID 1318012.1]

  • Transportable Tablespace vs Data Pump exp/imp

    Hi guys,
    10.2.0.5

    I will need your advice here before running some tests on this subject.

    I have 2 databases.
    One of them is the production database (the master site) and the other is an mview site (read-only mviews).
    I need to migrate both of them from HP-UX to Solaris (different endianness).

    For the production database, which all the mviews connect to and where the master tables are, we can use transportable tablespaces. I assume transportable tablespaces should be able to transport the mview logs as well? Can you indicate which types of objects TTS does not migrate?

    For the mview site, it seems that transportable tablespaces cannot move the mviews over. Therefore the only option there is Data Pump exp/imp.

    Any suggestions for this scenario?

    Thank you!

    With TTS, all objects stored in the transported tablespaces are migrated.

    See if this is useful...
    Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference and Version Where Lifted [ID 1454872.1]
