Data Pump options compared to exp/imp

I am using Data Pump for the first time, to export a table and restore it in another instance where it has been accidentally truncated. I don't see the options I'm used to from exp/imp. For example, ignoring create errors when the table structure already exists (ignore=y), or not loading the indexes (indexes=n). The Oracle material I have read so far doesn't even mention these issues, or how Data Pump handles them. Please let me know if you have experience with this, or whether it is even an issue.
I ran a Data Pump export of the table with data only, and another with the metadata. I couldn't read the metadata: I expected readable CREATE statements, but the file is binary with XML rather than SQL statements, and it's not yet clear how I could use it. Now I'm trying to take my data-only export and load the data into the second instance. The table already exists but is truncated, and I don't know how it will handle the indexes. Please bring me up to date if you can.

Thank you
Tony

Hello

You should read the Oracle documentation; it has very good examples:

http://download.Oracle.com/docs/CD/B19306_01/server.102/b14215/dp_overview.htm#sthref13

Parameter mapping from Data Pump Export to the original Export utility:

http://download.Oracle.com/docs/CD/B19306_01/server.102/b14215/dp_export.htm#sthref181
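
For what it's worth, the closest Data Pump equivalents to those exp/imp options are TABLE_EXISTS_ACTION (in place of ignore=y) and EXCLUDE=INDEX (in place of indexes=n). A minimal sketch for appending rows to an existing, truncated table - the directory, dump file, and table names here are placeholder assumptions:

impdp scott/tiger directory=DATA_PUMP_DIR dumpfile=mytable.dmp tables=MYTABLE content=DATA_ONLY table_exists_action=APPEND

With CONTENT=DATA_ONLY no DDL is imported at all, so the existing indexes are simply maintained as the rows are inserted; on a metadata-carrying import you would use EXCLUDE=INDEX to skip index creation instead.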

Regards

Tags: Database

Similar Questions

  • Does Data Pump really replace exp/imp?

    Hi guys,

    I've read some people saying that we should use Data Pump instead of exp/imp. But as far as I can see, if I have a database behind a firewall somewhere else, can't connect to that database directly, and need to get some data across, then Data Pump is useless to me, and I can just exp and imp the data instead.

    OracleGuy777 wrote:
    ...this cannot be solved by Data Pump. Yet people are saying exp and imp will become obsolete and no longer supported. Is Oracle going to do anything for people who have the same problem as me?


    The sky is not falling. It will not fall for a few years yet. Deprecated != not supported. Deprecated = not recommended.

    The time will come when exp/imp is no longer supported. But I don't see that happening for a few more versions of the database, particularly because Data Pump does not address all the problem areas.

  • exp/imp without data creates huge empty db!

    Hello
    I have an 8.1.7.4 database on Windows 2000, 400 GB.
    I wanted to exp/imp it, but I don't want the data, just the structure and code.

    So I did

    exp x/y@db FILE=D:\exp_database.dmp ROWS=N COMPRESS=Y FULL=Y CONSTRAINTS=Y GRANTS=Y INDEXES=Y

    imp x/y@db2 FILE=D:\exp_database.dmp ROWS=N FULL=Y CONSTRAINTS=Y GRANTS=Y INDEXES=Y FEEDBACK=1000


    The 400 GB exported to a 7 MB file, which is good.
    But when I import the 7 MB file, it creates an EMPTY 160 GB database!
    I see that 'some' of the data files are at full size, even though they're empty!
    There's no reason for this - or does the structure take 40% of the db?




    Thank you

    The 'COMPRESS' export parameter determines how the CREATE TABLE statements are generated.

    By default COMPRESS='Y', meaning each CREATE TABLE statement is generated with an INITIAL extent as large as the allocated size of the table (all extents of the table combined) - regardless of whether any rows exist in the table, are exported, or are imported.

    You could export with 'COMPRESS=N' instead. That generates CREATE TABLE statements with an INITIAL as large as the INITIAL in the exported table's definition. However, if the source table has an INITIAL definition of, say, 100M, then again, regardless of the number of rows present/exported/imported, the import will run CREATE TABLE with an INITIAL of 100M. In such cases you need to generate the CREATE TABLE statements yourself and create the tables manually.
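
    One way to pull the DDL out of a dump file so you can edit the storage clauses is imp's INDEXFILE option; a minimal sketch, reusing the file name from above:

    imp x/y@db2 FILE=D:\exp_database.dmp FULL=Y INDEXFILE=ddl.sql

    This writes the CREATE INDEX statements, plus the CREATE TABLE statements as comments, into ddl.sql without importing any data; you can then adjust the INITIAL values and create the objects manually.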

    Hemant K Chitale
    http://hemantoracledba.blogspot.com

    Edited by: Hemant K Chitale on July 29, 2010 14:00

  • Transportable tablespace vs Data Pump exp/imp

    Hi guys,
    10.2.0.5

    I'll need your advice here before running some tests on this.

    I have 2 databases.
    One of them is the production database (master site) and the other is an mview site (read-only mviews).
    I need to migrate both of them from HP-UX to Solaris (different endianness).

    For the production database, which all the mviews connect to and which holds the master tables, we can use transportable tablespaces. I assume transportable tablespaces should be able to transport the mview logs as well? Can you indicate which object types TTS does not migrate?

    For the mview site, it seems that transportable tablespaces won't migrate the mviews. Therefore, the only option there is Data Pump exp/imp.

    Any suggestions for this scenario?

    Thank you!

    With TTS, all objects residing in the tablespaces' datafiles are migrated.

    See if this is useful:
    Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference and Version Where Lifted [ID 1454872.1]
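
    Before the move you can also ask the database whether a tablespace set is self-contained; a minimal sketch, with placeholder tablespace names:

    EXEC DBMS_TTS.TRANSPORT_SET_CHECK('MVIEW_DATA,MVIEW_IDX', TRUE);
    SELECT * FROM TRANSPORT_SET_VIOLATIONS;

    Anything listed in TRANSPORT_SET_VIOLATIONS will not travel with the tablespace set and has to be handled separately (for example via Data Pump).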

  • exp/imp support question

    Hello all.
    It is claimed in some other forums that the exp and imp tools will be dropped from the product. Does anyone know whether this is true, and when it will happen?
    Thank you very much.

    It has not happened yet; you can still use exp/imp in version 11gR2.

    Since Oracle 10g there is a better tool called Data Pump for exports and imports.

  • Schema-wise exp/imp or full database exp/imp: which is better for a cross-platform 10g database migration?

    Hello

    When performing a cross-platform database migration (big-endian to little-endian) on 10g, which should be preferred: a full database export/import, or doing it schema by schema using exp/imp?

    The database is about 3 TB, an Oracle EBS server with many custom schemas in it.

    Your suggestions are welcome.

    For EBS, export/import of individual schemas is not supported because of the dependencies between the different EBS schemas - only a full export/import is supported.

    What are the exact versions of EBS and database?

  • I use the same query in two different DBs (exp/imp) but it runs in very different times!

    Hello

    I have 2 databases: one on a remote server, and one on my local machine. I used exp/imp to copy the data from the remote database to my local one. The structures, tables, and indexes are the same.
    My local machine is more powerful than the remote computer (RAM, CPU count, etc.).

    But when I run the query against the remote database from my local computer, it runs in 4 seconds (through a view). When I run it against my local db, the same query runs for 5 minutes.
    The number of rows returned is 18,500.

    What's wrong? And why are the Bytes values so different?

    Local explain plan is:
    SELECT STATEMENT ALL_ROWS Cost: 203 Bytes: 19,062,160 Cardinality: 18,329                                         
    Remote explain plan is:
    SELECT STATEMENT ALL_ROWS Cost: 226 Bytes: 3,855,214 Cardinality: 18,446                                         
    My explain plans (remote and local) and other data (autotrace, Oracle params) are in an Excel file (because this forum doesn't accept more than 30,000 words):

    http://www.eksicevap.com/tech/explainplan.xls
    or in RAR format:
    http://www.eksicevap.com/tech/explainplan.rar

    Thanks for help

    Oracle version information:
    Local:
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    Remote:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit

    Edited by: Melike on 15.Mar.2011 02:42

    Edited by: Melike on 15.Mar.2011 03:39

    Melike wrote:
    Thanks, Johan, I'll try.
    The local Oracle params in my first post were from 3 days ago, or else I mixed up my params files (this problem has been going on since last week). Here are the current local Oracle params, and the values are even worse:

    Local:
    pga_aggregate_target 0
    sga_max_size 536,870,912
    sga_target 0

    I have updated the new local Oracle params in my previous message.

    I'll try updating these values, but I hope my local Oracle doesn't fall over :)

    It can't get any worse...
    Do you have a memory_target (and memory_max_target) parameter? If so, you could put your memory into those settings instead and leave pga_aggregate_target unset (no sga_target or sga_max_size either).

    My settings (on a test/dev/lab-box):
    system@oracle11 SQL> show parameter memory

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    hi_shared_memory_address             integer     0
    memory_max_target                    big integer 3G
    memory_target                        big integer 3G
    shared_memory_address                integer     0
    system@oracle11 SQL> show parameter sga

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    lock_sga                             boolean     FALSE
    pre_page_sga                         boolean     FALSE
    sga_max_size                         big integer 0
    sga_target                           big integer 0
    system@oracle11 SQL> show parameter pga

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    pga_aggregate_target                 big integer 0

    This way I let Oracle sort things out by itself, allocating memory wherever it is needed most.
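
    If you want to try the same setup, a minimal sketch (3G is just an example value; memory_max_target is static, so the change goes to the spfile and needs a restart):

    ALTER SYSTEM SET memory_max_target=3G SCOPE=SPFILE;
    ALTER SYSTEM SET memory_target=3G SCOPE=SPFILE;
    ALTER SYSTEM RESET sga_target SCOPE=SPFILE SID='*';
    ALTER SYSTEM RESET sga_max_size SCOPE=SPFILE SID='*';
    ALTER SYSTEM RESET pga_aggregate_target SCOPE=SPFILE SID='*';
    SHUTDOWN IMMEDIATE
    STARTUP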

    Brgds
    Johan

  • 10g Full exp/imp using the sys user

    Hello

    I plan to do a full data export from one database into a different database created on another server/platform.
    The Oracle version/patch level is the same: 10.1.0.5.

    I also plan to use exp/imp instead of datapump, as I'm not sure how stable Datapump is in this version.

    I want to export and import as "/ as sysdba" because it spares having to grant sys privileges to the application database users.

    exp full=y file=filename.dmp log=filename.log
    imp full=y file=filename.dmp log=filename.log ignore=y

    ignore=y, because the tablespaces will be created in advance, owing to the different file system layout.

    Is the above method OK, or will it cause me problems?

    Thank you

    Hello

    exp/imp does not touch the SYS and SYSTEM accounts; they are not affected at all.
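
    As an aside, a minimal sketch of the "/ as sysdba" invocation itself - on a Unix shell the quotes have to be escaped so they reach exp/imp as part of the userid:

    exp \'/ as sysdba\' full=y file=filename.dmp log=exp_filename.log
    imp \'/ as sysdba\' full=y file=filename.dmp log=imp_filename.log ignore=y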

    Regards

  • Accelerate exp/imp or expdp/impdp

    Hello

    Is it possible to speed up an exp/imp or expdp/impdp that is already running? And is it possible to speed up a running RMAN backup or RMAN restore process?

    Kind regards

    007

    To accelerate a running Data Pump export or import, you can attach to the job and increase its level of parallelism... impdp attach=job
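
    A minimal sketch of that interaction (the job name and credentials are placeholders; you can find the real job name in DBA_DATAPUMP_JOBS):

    impdp system/password attach=SYS_IMPORT_FULL_01
    Import> parallel=8
    Import> status
    Import> continue_client

    Note that PARALLEL greater than 1 requires Enterprise Edition, and parallel import helps most when the dump set was written to multiple files.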

    I don't know any way to speed up a running RMAN backup.

    To expedite an RMAN restore, you can kill the restore and re-run it using several channels. The restore should take up where it left off and can run faster with more channels. This is only relevant if you have several backup pieces.

  • exp/imp of a table with a sequential primary key

    Hello

    I have a general question about exp/imp for a table with a sequential primary key. I need to exp rows from this table in an 11g DB and imp them into a 9i DB. The table is the same on 11g and 9i. The 11g table is updated daily. To speed up the process, I want to import only the new records into 9i. As the primary key of this table is sequential, I plan to export with a WHERE table_key > N condition, where N is the max table_key from the last import, and then import from that dump file. Do you see any problem doing it this way?
    Your expertise is greatly appreciated!

    Hello

    I see no problem with that at all.
    As long as you remember to use the 9i export tool, you should be OK.
    Also, a full table export followed by an import with ignore=y will skip the records that violate the primary key and import only the new records.
    However, that is not a very clean way to do it.
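
    A minimal sketch of the incremental export described above, using a parameter file to avoid shell quoting issues (table name, key value, and credentials are placeholders; run exp with the 9i client, as noted):

    # exp_incr.par
    tables=MY_TABLE
    query="where table_key > 100000"
    file=incr.dmp

    exp scott/tiger@db11g parfile=exp_incr.par
    imp scott/tiger@db9i tables=MY_TABLE file=incr.dmp ignore=y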

    Success!
    FJFranken

  • exp/imp only a few records in the table

    Hello
    Is it possible to exp/imp only a few records of my table during a table-level exp/imp?
    If so, what parameter should I be using?
    For example:
    I have 10,000 records in my table "TEST_TB".
    I have a full exp dump of the table.
    But I want to imp only 50,000 records of the same table into another schema.
    How would that be possible?
    Please advise me on this.

    Kind regards
    Faiz

    Hello

    It does not seem possible to limit the number of rows imported, but it is possible to limit the number of rows exported, using the QUERY parameter. You can find examples here:
    http://www.orafaq.com/wiki/Import_Export_FAQ
    http://docs.Oracle.com/CD/B28359_01/server.111/b28319/exp_imp.htm#autoId52
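
    For instance, a minimal sketch capping the export at 50,000 rows with ROWNUM (credentials and file name are placeholders; the outer single quotes are for a Unix shell):

    exp scott/tiger tables=TEST_TB query='"where rownum <= 50000"' file=test_tb_50k.dmp

    The import then loads whatever the dump contains, so the row limit is effectively applied at export time.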

    HtH
    Johan

  • export/import with the exp/imp commands, Oracle 10g XE on Ubuntu

    I have Oracle 10g XE installed on 2.6.32-28-generic #55 Ubuntu Linux, and I need some help on how to export/import the database with the exp/imp commands. The commands appear to be installed in the directory /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/bin, but I can't run them.

    The error message I get:

    No command 'exp' found, did you mean:
    Command 'xep' from package 'pvm-examples' (universe)
    Command 'ex' from package 'vim' (main)
    Command 'ex' from package 'nvi' (universe)
    Command 'ex' from package 'vim-nox' (universe)
    Command 'ex' from package 'vim-gnome' (main)
    Command 'ex' from package 'vim-tiny' (main)
    Command 'ex' from package 'vim-gtk' (universe)
    Command 'axp' from package 'axp' (universe)
    Command 'expr' from package 'coreutils' (main)
    Command 'expn' from package 'sendmail-base' (universe)
    Command 'epp' from package 'e16' (universe)
    exp: command not found

    Is there something I need to do?

    Hello

    You have not configured the environment variables properly:
    http://download.Oracle.com/docs/CD/B25329_01/doc/install.102/b25144/TOC.htm#BABDGCHH

    And of course that script has a little hiccup, so see:
    http://UbuntuForums.org/ShowPost.php?p=7838671&postcount=4
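
    Concretely, sourcing the environment script that ships with XE should put exp/imp on your PATH; a minimal sketch, using the path from your post:

    . /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/bin/oracle_env.sh
    exp user/password file=mydb.dmp owner=myschema

    oracle_env.sh sets ORACLE_HOME, ORACLE_SID, and PATH for the current shell; add the line to ~/.bashrc to make it permanent.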

    Kind regards
    Jari

  • How to upgrade the DB via the exp/imp utilities

    Hi Experts,

    Currently my DB version is 9.2.0.1.0. I need to upgrade the DB from 9.2.0.1.0 to 10.2.0.3.0. Can I get the steps to upgrade using the exp/imp utilities?

    Can you tell me how to proceed?


    Thank you
    Priya

    http://download.Oracle.com/docs/CD/B19306_01/server.102/b14238/expimp.htm#i262220
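
    In outline, the exp/imp upgrade path looks something like this sketch (credentials and file names are placeholders; the linked document is the authoritative procedure):

    # 1) full export from the 9.2.0.1.0 database, using the 9.2 exp
    exp system/manager full=y file=full92.dmp log=exp_full92.log
    # 2) create the empty 10.2.0.3.0 database and pre-create the tablespaces
    # 3) full import into the new database, using the 10.2 imp
    imp system/manager full=y file=full92.dmp log=imp_full92.log ignore=y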

  • GoldenGate batch replication compared to Data Pump exp/imp

    Hello
    I have a production server running many DBMS jobs for 4 to 5 hours after close of business.

    The procedure:

    Time  Activity
    T1    DBMS jobs run
    T2    DBMS jobs run
    T3    Generate reports
    T4    DBMS jobs run
    T5    DBMS jobs run

    For the reporting step, I would like to use a different server, so that the schedule does not stall at time T3 and can move straight on.

    I see two possible solutions.

    (A) At time T3, launch a GoldenGate batch replication. Once the updated data (i.e. everything up to T2) has been replicated, start stage T3.

    (B) Take an incremental backup after T2 and import it on the report server. Once the backup has been applied, go ahead with T3.

    Which solution will take less time: the incremental backup, or the GoldenGate batch replication?

    Thank you for your contributions.

    Hiren Pandya

    GoldenGate is not a 'batch replication' tool. It is a continuous, real-time capture & apply tool. If you try to use it for 'batch' replication, you not only make life difficult for yourself, but you also seriously limit its potential.

    If you have two servers, a production server 'A' and a 'B' report server, then just turn on the replication of the GoldenGate all the time; for example, using your example (replication from Server A to Server B):

    = Time Activity =
    T1' (new) start GG (A -> B); reports will be generated off of 'B'
    T2' (was T1) DBMS jobs run
    T3' (was T2) DBMS jobs run
    T4' (was T3) wait for the A -> B lag to reach zero -> generate reports (possibly suspend GG while the reports are running)
    T5' (was T4) DBMS jobs run (GG can keep replicating throughout...)
    T6' (was T5) DBMS jobs run

    Note that if GG runs without interruption from time T1', then by the time you get to T4' you will probably only have to wait a few seconds or minutes before the report instance has caught up (lag = 0). If you want to report off Server B without the results of the DBMS jobs (T5' and T6'), then suspend replication until you have finished running the reports. You can resume replication at any time - no data will be lost, as GG always picks up where it stopped.
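
    In GGSCI terms, that pause/resume cycle is only a few commands; a minimal sketch, assuming a Replicat named REP1 on server B (check the lag first, stop it while the reports run, then start it again):

    GGSCI> LAG REPLICAT REP1
    GGSCI> STOP REPLICAT REP1
    GGSCI> START REPLICAT REP1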

    (I hope I understood the problem correctly).

    Cheers,
    m

  • How do I exp/imp data only? 9i, without using Data Pump

    Does anyone know how to exp or imp only the data in the tables?

    I have a dump of database user AA; it is rather small.
    All my data gets reset to defaults by running scripts, and I want to restore it to the point of the exp backup.

    But if I imp, all the tables, views, and triggers are already there.

    I tried 'imp aa/aa file=aa.dmp ignore=y', but it imports only part of the dump.

    Anyway, can I choose to imp data only, as impdp can, or to exp only the data?


    I can't drop the user, as it is constantly connected from many machines.

    How am I supposed to manage this easily?

    Thank you, really


    BR/ricky

    With the traditional exp and imp utilities there is no option to export only the data. The DDL for the exported objects is always exported. However, you can choose to export only the DDL, without data, using the rows=n parameter.

    You say that your import is only a partial import. How so? What was missing? And what were your exact import parameters? Your command is missing either the full=y or the fromuser=/touser= parameters that I would expect.

    If you truncate all existing objects and then run an import with ignore=y on a dmp file created as a user-level export, then object creation fails for all pre-existing objects, but the data will be loaded. If you cannot truncate the target tables first, the import will be slow, because it has to raise a unique constraint violation error for every duplicate row it tries to insert. Another approach would be to drop the user and all objects (or just all the user's objects) prior to the import and let the import recreate the user and the user's objects.

    If you use a full export dmp file, then you should do a fromuser=/touser= import.
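
    A minimal sketch of the truncate-then-reload approach described above, using the names from the thread (spooling and running the generated statements is left out):

    SQL> SELECT 'TRUNCATE TABLE ' || table_name || ';' FROM user_tables;
    -- run the generated statements as user AA, then reload the rows;
    -- ignore=y skips the CREATE errors on the pre-existing objects
    imp aa/aa file=aa.dmp fromuser=aa touser=aa ignore=y

    Note that TRUNCATE fails if enabled foreign keys reference a table, so those constraints may need to be disabled first.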

    HTH - Mark D Powell.
