Accelerate exp/imp or expdp/impdp

Hello

Is it possible to speed up an exp/imp or expdp/impdp that is already running? Is it possible to speed up a running RMAN backup or restore process?

Kind regards

007

To accelerate a running Data Pump export or import you can attach to the job and increase the level of parallelism... impdp attach=job_name
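
A minimal sketch of attaching to a running job and raising its parallelism (the job name SYS_IMPORT_FULL_01, credentials, and the worker count are illustrative; more workers only help when the dump file set has multiple files):

    impdp system/password ATTACH=SYS_IMPORT_FULL_01
    Import> STATUS
    Import> PARALLEL=4
    Import> CONTINUE_CLIENT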

I don't know any way to speed up a running RMAN backup.

To expedite an RMAN restore, you can kill the restore and re-run it using several channels.  The restore should pick up where it left off and can run faster with multiple channels.  This is only relevant if the backup consists of several backup pieces.
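
A minimal sketch of re-running the restore with multiple channels (the channel count and the DISK device type are assumptions; adjust them to your backup media):

    RUN {
      ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
      ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
      ALLOCATE CHANNEL c3 DEVICE TYPE DISK;
      RESTORE DATABASE;
    }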

Tags: Database

Similar Questions

  • exp/imp vs expdp/impdp

    Hi all;
    Which performs better: the traditional exp/imp, or expdp/impdp?

    Hello

    Oracle Data Pump is a newer, faster, and more flexible alternative to the 'exp' and 'imp' utilities used in previous Oracle versions. In addition to the basic export and import functionality, Data Pump provides a PL/SQL API and support for external tables.

    Oracle recommends using Data Pump for all 10g and later databases, and the old exp/imp for pre-10g databases :)
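
    As a sketch of the PL/SQL API mentioned above, the following launches a schema-mode export through DBMS_DATAPUMP (the SCOTT schema, the scott.dmp file name, and the DATA_PUMP_DIR directory object are assumptions):

    DECLARE
      h  NUMBER;
      js VARCHAR2(30);
    BEGIN
      -- open a schema-mode export job
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
      -- write the dump file to an existing directory object
      DBMS_DATAPUMP.ADD_FILE(h, 'scott.dmp', 'DATA_PUMP_DIR');
      -- restrict the job to the SCOTT schema
      DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.WAIT_FOR_JOB(h, js);
      DBMS_OUTPUT.PUT_LINE('Job state: ' || js);
    END;
    /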

  • Add data using EXPDP/IMPDP

    I have a log table that is exported daily and imported into another schema. What I currently do: export the LOG table, drop the existing LOG table in the target schema, and then import into it.
    Is it possible to append only the last day's worth of data rather than exporting the full table and importing it into the target schema table every day?

    Current exp/imp commands:
    expdp system/XXXX SCHEMAS=LOG INCLUDE=TABLE:"IN ('DAILY_LOGGER')" DIRECTORY=dumpdata DUMPFILE=daily_logger.dmp LOGFILE=daily_logger_exp.log

    impdp system/xxxx REMAP_SCHEMA=LOG:LOG_BKP DIRECTORY=dumpdata PARALLEL=1 DUMPFILE=daily_logger.dmp LOGFILE=daily_logger_imp.log


    DB version:
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE 11.1.0.6.0 Production
    TNS for Linux: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production

    OS version:
    Linux CLIENT1 2.6.18-128.el5 #1 SMP Wed Jan 21 08:45:05 EST 2009 x86_64 x86_64 x86_64 GNU/Linux

    Thank you

    Published by: najet on June 3, 2010 23:11

    You can use the QUERY parameter with single quotes:

    QUERY=LOG.DAILY_LOGGER:"where quote_date < to_char(sysdate ...)"

    Dean
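
    A hedged sketch of the full daily cycle that follows from this - export only the last day's rows, then append them on the target (the log_date column, the cutoff expression, the parfile name, and the TABLE_EXISTS_ACTION=APPEND step are assumptions beyond the reply above; a parfile sidesteps shell-quoting issues):

    $ cat daily_logger.par
    TABLES=LOG.DAILY_LOGGER
    DIRECTORY=dumpdata
    DUMPFILE=daily_logger.dmp
    QUERY=LOG.DAILY_LOGGER:"WHERE log_date > trunc(sysdate) - 1"

    $ expdp system/XXXX parfile=daily_logger.par
    $ impdp system/XXXX DIRECTORY=dumpdata DUMPFILE=daily_logger.dmp REMAP_SCHEMA=LOG:LOG_BKP TABLE_EXISTS_ACTION=APPEND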

  • expdp/impdp method for a database upgrade from 11.2.0.4 to 12.1.0.2 - any error in "row order" logic?

    Hi DBA:

    We are evaluating different methods of upgrading an 11gR2 DB to 12c (12.1.0.2) this year, and it is very important for us to use the right method.  It was reported that when a DB on version 9i was upgraded to 11g, the row order of some tables came back wrong when executing SELECT statements in the application (a Java application) - at the moment I don't know whether it was a 'select * from ...' or a specific query with a WHERE clause, nor what type of data was wrong (maybe sequence or CLOB data, etc.).

    As I understand it, 'exp/imp' and 'expdp/impdp' are logical DB backup methods, so Oracle automatically imports the data and lays it down in each table.  If there are no errors in the impdp logs after it finishes, there should not be any problems, right?

    If we use the method 'create a new database + expdp/impdp of the user schemas' to port data from 11g to 12c, my questions are:

    1. Will this method lead to erroneous sequence data or row ordering?  If so, how do we re-generate the sequence numbers?

    2. Can this method carry over CLOB and BLOB data without any errors?  If there are errors, how do we detect and repair them?

    I used the same method to port data from 10.2.0.5 to 11g with no problems.

    I'll keep this thread posted after getting more information from the application team.

    Thank you very much!

    Dania

    INSERT ... SELECT, CREATE TABLE ... AS SELECT, and Export-Import will all "re-organize" the table by assigning new blocks.

    An in-place upgrade (using DBUA or the command line) does not re-create the user tables (although it DOES rebuild the data dictionary tables).  Therefore, it does not change the physical location of rows in user tables.  However, there is no guarantee that subsequent INSERT, DELETE, and UPDATE statements will not change the physical location of rows.  Nor is there any guarantee that a SELECT statement will always return rows in the same stored order unless an ORDER BY is specified explicitly.

    If you export the data while it is still in use, you cannot avoid sequence updates in the source database.  This usually results in mismatched sequence numbers between the source database and the new (imported) database. In fact, even the table data may differ, unless you use CONSISTENT or FLASHBACK_SCN.  (Those two options in exp/expdp do NOT prevent sequence updates.)  The right way to manage sequences is to rebuild or increment them in the imported database to match the values in the source database once the import is complete.
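
    A hedged sketch of bumping one imported sequence up to the source value (the order_seq/orders names and the gap of 2000 are illustrative):

    -- compare the sequence with the data it feeds
    SELECT MAX(order_id) FROM orders;            -- say 50000 in the source
    SELECT order_seq.NEXTVAL FROM dual;          -- say 48000 after the import
    -- jump the sequence forward in one step, then restore the normal increment
    ALTER SEQUENCE order_seq INCREMENT BY 2000;
    SELECT order_seq.NEXTVAL FROM dual;
    ALTER SEQUENCE order_seq INCREMENT BY 1;
    -- or simply drop and re-create the sequence with START WITH 50001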

    Hemant K Collette

  • TABLES parameter is viewed differently in expdp/impdp?

    11.2 / Linux

    With the original exp/imp, the OWNER, TABLES, and FULL modes are mutually exclusive. I.e. in original exp, if you want to take a backup of the EMP table in the SCOTT schema, the following will error out:
    exp / tables=EMP owner=SCOTT file=selected_tables.dmp log=selected_tables.log
    
    It will end up in:
    EXP-00026: conflicting modes specified
    EXP-00000: Export terminated unsuccessfully
    You must prefix the table name with the owner (tables=<owner>.<table_name>) as below for the requirement above:
    exp / tables=SCOTT.EMP file=selected_tables.dmp log=selected_tables.log
    But in impdp things work differently

    I was provided an expdp dump of REG_TRACK_EVT from a CLS_USR_PROD schema. I wanted to import that table into the development schema CLS_USR_DEV. So I did, but it kept throwing error ORA-39166 as below:
    $ impdp userid=\'/ as sysdba\' DIRECTORY=DPUMP_DIR DUMPFILE= reg_track_evt.dmp LOGFILE=reg_track_evt-imp.log tables=CLS_USR_DEV.REG_TRACK_EVT
    
    Import: Release 11.2.0.1.0 - Production on Mon Jul 30 18:24:32 2012
    
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39002: invalid operation
    ORA-39166: Object CLS_USR_DEV.REG_TRACK_EVT was not found.
    To solve this problem, I had to remove the TABLES parameter (!) and use the REMAP_SCHEMA parameter:
    impdp userid=\'/ as sysdba\' DIRECTORY=DPUMP_DIR DUMPFILE= reg_track_evt.dmp LOGFILE=reg_track_evt-imp.log REMAP_SCHEMA=CLS_USR_PROD:CLS_USR_DEV
    If this is the case, then what's the point of having the TABLES parameter in impdp?

    It's really the same thing regarding conflicting modes. Look at what you used in your import command:

    impdp userid=\'/ as sysdba\' DIRECTORY=DPUMP_DIR DUMPFILE=reg_track_evt.dmp LOGFILE=reg_track_evt-imp.log REMAP_SCHEMA=CLS_USR_PROD:CLS_USR_DEV

    If you are remapping CLS_USR_PROD:CLS_USR_DEV, then the table in the dumpfile is called CLS_USR_PROD.REG_TRACK_EVT, but you told it to look for CLS_USR_DEV.REG_TRACK_EVT, and that table does not exist in the dumpfile.

    What you specify in the TABLES= parameter is the table in the dumpfile that you want. What you specify in the REMAP_SCHEMA parameter is where you want the table created. So your first command was close; you need this:

    impdp userid=\'/ as sysdba\' DIRECTORY=DPUMP_DIR DUMPFILE=reg_track_evt.dmp LOGFILE=reg_track_evt-imp.log TABLES=CLS_USR_PROD.REG_TRACK_EVT REMAP_SCHEMA=CLS_USR_PROD:CLS_USR_DEV

    I hope this helps.

    Dean

  • How do I exp/imp only data? 9i, not using Data Pump

    Does anyone know how to exp or imp only the data in the tables?

    I have a dump of database user AA; it is rather small.
    All my data gets reset to defaults by running scripts, and I want to restore it back to the point of the exp backup.

    But when I imp, all the tables, views, and triggers are already there.

    I tried 'imp aa/aa file=aa.dmp ignore=y', but it does not do a partial import of the dump.

    Is there any way I can choose to imp data only, like impdp's DATA_ONLY option, or exp only the data?


    I can't drop the user, as it is already connected from many machines.

    How am I supposed to manage this easily?

    Thank you, really


    BR/ricky

    With the traditional exp and imp utilities there is no option to export only the data. The DDL for the exported objects is always exported. However, you can choose to export only the DDL without data using the rows=n parameter.

    You say that your import was only a partial import. How so? What was missing? What were your exact import parameters? Your command is missing either the full=y or the fromuser=/touser= parameters that I would expect.

    If you truncate all existing objects and then run an import with ignore=y on a dmp file created as a user-mode export, then object creation fails for all pre-existing objects, but the data will be loaded. If you are unable to truncate the target tables first, then the import will be slow because it will hit a lot of unique-constraint violation errors during the import. Another approach would be to drop the user and all objects (or just all the user's objects) prior to the import and let the import re-create the user and user objects.

    If you use a dmp file from a full export then you should do a fromuser=/touser= import.
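
    A hedged sketch of the truncate-then-import approach described above (user AA and file aa.dmp are from the question; the generated TRUNCATE statements would be spooled and executed before the import):

    SQL> SELECT 'TRUNCATE TABLE AA.' || table_name || ';' FROM dba_tables WHERE owner = 'AA';
    $ imp aa/aa file=aa.dmp ignore=y fromuser=aa touser=aa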

    HTH - Mark D Powell.

  • Schema-wise exp/imp or full database exp/imp: which is better for a cross-platform 10g database migration?

    Hello

    When you perform a cross-platform database migration (big-endian to little-endian) on 10g, which should be preferred: a full database export/import, or can it be done schema by schema using exp/imp?

    The database is about 3 TB and is an Oracle EBS server with many custom schemas in it.

    Your suggestions are welcome.

    For EBS, export/import of individual schemas is not supported because of the dependencies between the different EBS schemas - only a full export/import is supported.

    What are the exact versions of EBS and database?

  • problem in expdp/impdp

    Hi, I'm currently running expdp/impdp but I get the same errors every time. OS: Windows Server 2003, Oracle 10g.

    errors:

    C:\Documents and Settings\Administrateur> impdp "/ as sysdba" directory=data_pump_dir dumpfile=scott.dmp logfile=scott.log

    Import: Release 10.2.0.3.0 - Production on Tuesday, March 25, 2014 10:43:18

    Copyright (c) 2003, 2005, Oracle.  All rights reserved.

    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production

    With the Partitioning, OLAP and Data Mining options

    ORA-31626: job does not exist

    ORA-04063: package body "SYS.DBMS_INTERNAL_LOGSTDBY" has errors

    ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_INTERNAL_LOGSTDBY"

    ORA-06512: at "SYS.KUPV$FT", line 834

    ORA-04063: package body "SYS.DBMS_INTERNAL_LOGSTDBY" has errors

    ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_INTERNAL_LOGSTDBY"

    SQL> alter package DBMS_INTERNAL_LOGSTDBY compile body;

    Warning: Package Body altered with compilation errors.

    SQL> show errors

    Errors for PACKAGE BODY DBMS_INTERNAL_LOGSTDBY:

    LINE/COL ERROR

    -------- -----------------------------------------------------------------

    2563/4 PL/SQL: statement ignored

    2563/20 PL/SQL: ORA-00942: table or view does not exist

    2625/3 PL/SQL: statement ignored

    2625/19 PL/SQL: ORA-00942: table or view does not exist

    2809/2 PL/SQL: statement ignored

    2809/13 PL/SQL: ORA-00942: table or view does not exist

    HELP ME PLEASE

    Since you're using Oracle version 10g...

    Check whether you have columns that contain TIMESTAMP(6) data by using this type of query:

    select owner, table_name, column_name, data_type from dba_tab_columns where column_name = 'TIMESTAMP' and data_type like 'TIMESTAMP%';

    If the previous query returns a list of tables, you must do the following for each table:

    alter table system.table_name modify (timestamp date);

    Then recompile the package "DBMS_INTERNAL_LOGSTDBY" again.

    I hope this helps.

    Kind regards

  • Expdp/Impdp does not work

    Hello!!

    I use Oracle 10g Enterprise Edition on Windows 7. I have run imports and exports several times and they worked fine until today. But now they fail with errors.

    UDI-00008: operation generated ORACLE error 31623

    ORA-31623: a job is not attached to this session by the specified handle

    ORA-06512: at "SYS.DBMS_DATAPUMP", line 2745

    ORA-06512: at "SYS.DBMS_DATAPUMP", line 3712

    ORA-06512: at line 1

    When I googled, I found suggestions about memory settings and "aq_tm_processes". All of those are as they should be. When I run the expdp/impdp command, no dumpfile or logfile is created in the specified directory.

    Could it be a question of OS-level read/write privileges for the user?

    Can someone guide me please...

    Hi-

    Clean up all the old Data Pump jobs. The note below will help you with that:

    How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS? (Doc ID 336014.1)
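
    A hedged sketch of spotting the orphaned jobs (the job name in the comment is illustrative; the actual cleanup steps are in the note):

    SQL> select owner_name, job_name, state from dba_datapump_jobs;
    -- a NOT RUNNING orphan keeps a master table of the same name as the job;
    -- per the note, drop it, e.g.: drop table scott.SYS_EXPORT_SCHEMA_01;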

    Check for any invalid objects/packages:

    SQL> connect / as sysdba

    SQL> spool out.html

    SQL> set pagesize 9999

    SQL> set markup html on

    SQL> select name from v$database;

    SQL> select * from v$version;

    SQL> select distinct (length(addr)*4) "Word Size" from v$process;

    SQL> select owner, object_name, object_type, status from dba_objects where status = 'INVALID';

    SQL> select owner, object_type, count(*) from dba_objects where status = 'INVALID' group by owner, object_type;

    SQL> select substr(comp_id,1,15) comp_id, status, substr(version,1,10) version, modified,
                substr(comp_name,1,30) comp_name, substr(schema,1,15) schema
         from dba_registry order by comp_id;

    SQL> select * from registry$history;

    Increase the streams pool size to 100-150 MB, then re-run the export and let us know if the problem persists.
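
    A minimal sketch of that last step (the 128M value is illustrative; SCOPE=BOTH assumes an spfile):

    SQL> alter system set streams_pool_size = 128M scope=both;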

    BR

    AK

  • expdp/impdp: constraints in a parent-child relationship

    Hello

    I have a table parent1, and tables child1, child2, and child3 have foreign keys created against parent1.

    Now I want to do a delete on parent1, but as the number of records in parent1 is very high, we are going with expdp/impdp using the QUERY option.

    I took a query-level expdp of parent1. Then I dropped parent1 with the CASCADE CONSTRAINTS option, and all the foreign keys on child1, 2, and 3 that reference parent1 were automatically dropped.

    Now, if I run impdp with that query-level dump file, will the foreign key constraints be created automatically on child1, 2, and 3, or do I need to manually recreate them?

    Kind regards

    ANU

    Hello
    The FKs will not be in the dumpfile - see the code example below, where I generate a sqlfile following pretty much the process you would have done. This is because a FK belongs to the DDL of the child table, not the parent.

    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    
    OPS$ORACLE@EMZA3>create table a (col1 number);
    
    Table created.
    
    OPS$ORACLE@EMZA3>alter table a add primary key (col1);
    
    Table altered.
    
    OPS$ORACLE@EMZA3>create table b (col1 number);
    
    Table created.
    
    OPS$ORACLE@EMZA3>alter table b add constraint x foreign key (col1) references a(col1);
    
    Table altered.
    
    OPS$ORACLE@EMZA3>
    
    EMZA3:[/oracle/11.2.0.1.2.DB/bin]# expdp / include=TABLE:\"=\'A\'\"
    
    Export: Release 11.2.0.3.0 - Production on Fri May 17 15:45:50 2013
    
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    Starting "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04":  /******** include=TABLE:"='A'"
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    . . exported "OPS$ORACLE"."A"                                0 KB       0 rows
    Master table "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for OPS$ORACLE.SYS_EXPORT_SCHEMA_04 is:
      /oracle/11.2.0.3.0.DB/rdbms/log/expdat.dmp
    Job "OPS$ORACLE"."SYS_EXPORT_SCHEMA_04" successfully completed at 15:45:58
    
    Import: Release 11.2.0.3.0 - Production on Fri May 17 15:46:16 2013
    
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    Master table "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
    Starting "OPS$ORACLE"."SYS_SQL_FILE_FULL_01":  /******** sqlfile=a.sql
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Job "OPS$ORACLE"."SYS_SQL_FILE_FULL_01" successfully completed at 15:46:17
    
    -- CONNECT OPS$ORACLE
    ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
    -- new object type path: SCHEMA_EXPORT/TABLE/TABLE
    CREATE TABLE "OPS$ORACLE"."A"
       (    "COL1" NUMBER
       ) SEGMENT CREATION IMMEDIATE
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
     NOCOMPRESS LOGGING
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "SYSTEM" ;
    -- new object type path: SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ALTER TABLE "OPS$ORACLE"."A" ADD PRIMARY KEY ("COL1")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "SYSTEM"  ENABLE;
    -- new object type path: SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    DECLARE I_N VARCHAR2(60);
      I_O VARCHAR2(60);
      NV VARCHAR2(1);
      c DBMS_METADATA.T_VAR_COLL;
      df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS';
     stmt varchar2(300) := ' INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,d1,cl1) VALUES (''I'',6,:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,NULL,:14,:15,NULL,:16,:17)';
    BEGIN
      DELETE FROM "SYS"."IMPDP_STATS";
      c(1) := 'COL1';
    
      DBMS_METADATA.GET_STAT_INDNAME('OPS$ORACLE','A',c,1,i_o,i_n);
      EXECUTE IMMEDIATE stmt USING 0,I_N,NV,NV,I_O,0,0,0,0,0,0,0,0,NV,NV,TO_DATE('2013-05-17 15:43:24',df),NV;
    
      DBMS_STATS.IMPORT_INDEX_STATS('"' || i_o || '"','"' || i_n || '"',NULL,'"IMPDP_STATS"',NULL,'"SYS"');
      DELETE FROM "SYS"."IMPDP_STATS";
    END;
    /
    

    Kind regards
    Harry

    http://dbaharrison.blogspot.com/

  • expdp, impdp Grid 11.2 and ACFS

    Grid version: 11.2.0.3
    OS: Red Hat Enterprise Linux 5.4

    A few months ago on our RAC cluster, while taking an expdp backup to a local Linux file system, I got some errors. I don't remember the error code or the script now, as I had too much work that day. The problem was solved when we used an ACFS file system location as the directory object for expdp.

    Today, on the same RAC cluster, trying to reproduce that issue, I tested an expdp backup to a local Linux file system (/home/oracle/pumpDir), and the expdp finished without any problems.

    Has anyone run into problems with expdp/impdp in a RAC cluster environment due to using a local Linux file system?

    The error would most probably be "file not found".

    expdp/impdp (Data Pump) can start several parallel processes across the cluster. If not all nodes can find the 'local' directory, you will get the above error.

    Bottom line: the parallel expdp processes may or may not land on a single node, so the directory MUST be accessible across the whole cluster.
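
    As a hedged aside, on 11.2 you can instead pin the job to the instance it was started from with CLUSTER=N, so a node-local directory suffices (the schema, directory, and file names are illustrative):

    expdp system/password SCHEMAS=scott DIRECTORY=local_dir DUMPFILE=scott.dmp CLUSTER=N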

  • Transportable Tablespace vs Datapump exp/imp

    Hi guys,
    10.2.0.5

    I will need your advice here before running some tests on this subject.

    I have 2 databases.
    One of them is the production database (master site) and the other a mview site (read-only mviews).
    I need to migrate both of them from HP-UX to Solaris (different endianness).

    For the production database, where all the mview logs and master tables live, we can use transportable tablespaces. I assume transportable tablespaces should be able to transport the mview logs as well? Can you indicate which object types TTS does not migrate?

    For the mview site, it seems that transportable tablespaces cannot migrate the mviews over. Therefore, the only option there is Data Pump exp/imp.

    Any suggestions for this scenario?

    Thank you!

    With TTS, all objects stored in the transported datafiles are migrated.

    See if this is useful...
    Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference and Version Where Applicable [ID 1454872.1]
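
    A hedged sketch of checking in advance what a tablespace set would leave behind (the USERS tablespace name is illustrative; anything non-transportable shows up in the violations view):

    SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('USERS', TRUE);
    SQL> select * from transport_set_violations;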

  • exp/imp of a table with a sequence-based primary key

    Hello

    I have a general question about exp/imp of a table with a sequence-based primary key. I need to exp rows of this table from an 11g DB and imp them into a 9i DB. The table is the same on 11g and 9i. The 11g table is updated daily. To expedite the process, I want to import only the new records into 9i. As the table's primary key is sequential, I intend to export with WHERE table_key > N, where N is the max table_key from the last import, and then import from that dump file. Do you see any problem doing it this way?
    Your expertise is greatly appreciated!

    Hello

    I see no problem with that at all.
    As long as you remember to use the 9i export tool, you should be OK.
    Also, a full table export and import with ignore=y will skip the records that violate the primary key and import only the new records.
    However, that is not a very clean way to do it.
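
    A hedged sketch of the incremental export/import (the table name, connect strings, and the cutoff value 123456 are illustrative; a parfile avoids shell-escaping of the ">"):

    $ cat incr.par
    TABLES=my_table
    QUERY="WHERE table_key > 123456"
    FILE=incr.dmp
    LOG=incr_exp.log

    # run with the 9i exp client against the 11g DB, per the advice above
    $ exp scott/tiger@eleven_g_db parfile=incr.par
    $ imp scott/tiger@nine_i_db TABLES=my_table FILE=incr.dmp IGNORE=y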

    Success!
    FJFranken

  • exp/imp only a few records in the table

    Hello
    Is it possible to exp/imp only a few records of my table during a table-level exp/imp?
    If so, what parameter should I be using?
    For example:
    I have 10000 records in my table named "TEST_TB".
    I have a full exp dump of the table.
    But I want to imp only 50000 records from that table into another schema.
    How would that be possible?
    Please advise me on this.

    Kind regards
    Faiz

    Hello

    It does not seem possible to limit the number of rows imported, but it is possible to limit the number of rows exported with the QUERY parameter. See the examples at:
    http://www.orafaq.com/wiki/Import_Export_FAQ
    http://docs.Oracle.com/CD/B28359_01/server.111/b28319/exp_imp.htm#autoId52
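
    A hedged sketch limiting the exported rows with QUERY and ROWNUM (TEST_TB is from the question; credentials and file name are illustrative):

    $ cat test_tb.par
    TABLES=TEST_TB
    QUERY="WHERE ROWNUM <= 50000"
    FILE=test_tb.dmp

    $ exp scott/tiger parfile=test_tb.par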

    HtH
    Johan

  • exp/imp desupport question

    Hello @ all.
    It is claimed in some other forums that the exp and imp tools will be dropped from the product. Does anyone know if this is true, and when it will happen?
    Thank you very much.

    They have not been dropped; you can still use exp/imp in version 11gR2.

    Since Oracle 10g there is a better tool called "Data Pump" for exports and imports.
