parallel expdp on bigfile

EXPDP, even run in parallel, just sits for 8 hours backing up a bigfile tablespace...

RMAN has the ability to use: SECTION SIZE ##G

which allows RMAN parallelism to work effectively and back up a bigfile in parallel...

but I'm on 11.2.0.3 and there is a bug in RMAN SECTION SIZE that makes it unusable...

the only solution is to upgrade, which I am unable to do at the moment...

is there an equivalent to SECTION SIZE for expdp that would let it parallelize a bigfile?

As noted above - there are things Data Pump must do serially (at least in versions up to 12c, as far as I know).

The only ways to try to work around this are:

(a) partition the source table (not a 5-minute job) - then each slave can work on a separate partition

(b) run 8 separate jobs, each using QUERY=xxx (where xxx selects a subset of the data) - it's quite messy, but it can be done
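A rough sketch of option (b): the script below only prints the eight expdp command lines so they can be inspected before being launched as concurrent jobs. The table BIGTAB, its numeric key column ID, the credentials, and the directory object are all hypothetical placeholders.

```shell
# Sketch: emit one expdp command per data slice of a hypothetical
# table BIGTAB keyed by a numeric column ID. MOD(id, 8) = n yields
# eight disjoint row subsets, one per job.
gen_expdp_cmds() {
  slices=8
  i=0
  while [ "$i" -lt "$slices" ]; do
    # Each job gets its own dumpfile/logfile and a QUERY= predicate.
    echo "expdp system/*** TABLES=BIGTAB DIRECTORY=dpump_dir" \
         "DUMPFILE=bigtab_slice$i.dmp LOGFILE=bigtab_slice$i.log" \
         "QUERY=BIGTAB:\\\"WHERE MOD(id,$slices)=$i\\\""
    i=$((i+1))
  done
}
gen_expdp_cmds
```

Each printed command would then be launched in the background or from separate sessions, so the eight exports run concurrently.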

Cheers,

Rich

Tags: Database

Similar Questions

  • expdp on cluster - how to determine where the dumpfile goes

    I am running expdp with parallel=4 and cluster=y and everything works fine, except that I have to check all 4 nodes to find where the files were put.
    Is there a SQL query that would tell me which node a given file will be written to?

    Hello

    First of all, make sure the directory object points to shared storage that can be seen by all the Oracle RAC instances designated to run the Data Pump worker processes.
    Query dba_directories to see the dumpfile directories.

    See this post
    http://download.Oracle.com/otndocs/products/database/enterprise_edition/utilities/PDF/datapump11gr2_twp_rac_1009.PDF
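    A query along these lines can help locate the sessions (a sketch: dba_datapump_sessions lists the sessions belonging to each job, and joining to gv$session exposes the instance number; since saddr values are per-instance, treat the result as indicative in a RAC):

```sql
-- Sketch: map Data Pump sessions for running jobs to RAC instances
SELECT dp.owner_name, dp.job_name, s.inst_id, s.program
FROM   dba_datapump_sessions dp
JOIN   gv$session s ON s.saddr = dp.saddr;
```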

  • Export data from site 1 to site 2

    Hi all

    I am migrating from Oracle 10gR1 at site 1 to Oracle 11gR2 at site 2.  I can't use the expdp parallel option (Oracle 10.1 Standard Edition).
    I had to do an export from site 1 onto a NAS, move the dump files to site 2, and do an import.

    Is there a way to do this much faster?

    Thank you!

    Published by: user12064507 on October 19, 2011 08:26

    Please see the steps in the Upgrade Guide - http://download.oracle.com/docs/cd/E11882_01/server.112/e23633/expimp.htm

    HTH
    Srini

  • Accelerate expdp with PARALLEL and the FILESIZE setting

    Every day we back up 6 schemas with a total size of 80 GB.
    From the Oracle documentation, I understand the parallel worker processes work best when we split the dump file, because each slave process can work on a separate file.
    But I don't know how many parallel processes should be spawned, or into how many files the dump should be split.

    The expdp command that we plan to use:
    expdp userid=\'/ as sysdba\' SCHEMAS = schema1,schema2,schema3,schema4,schema5,schema6  DUMPFILE=composite_schemas_expdp.dmp LOGFILE=composite_schemas_expdp.log  DIRECTORY=dpump_dir2 PARALLEL=3
    Related information:

    11.2.0.2

    Solaris 10 (x86_64) on HP Proliant Machine

    8 CPU with 32 GB of RAM
    SQL > show parameter parallel
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    fast_start_parallel_rollback         string      LOW
    parallel_adaptive_multi_user         boolean     TRUE
    parallel_automatic_tuning            boolean     FALSE
    parallel_degree_limit                string      CPU
    parallel_degree_policy               string      MANUAL
    parallel_execution_message_size      integer     16384
    parallel_force_local                 boolean     TRUE
    parallel_instance_group              string
    parallel_io_cap_enabled              boolean     FALSE
    parallel_max_servers                 integer     32
    parallel_min_percent                 integer     0
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    parallel_min_servers                 integer     0
    parallel_min_time_threshold          string      AUTO
    parallel_server                      boolean     TRUE
    parallel_server_instances            integer     2
    parallel_servers_target              integer     32
    parallel_threads_per_cpu             integer     2
    recovery_parallelism                 integer     0

    resistanceIsFruitful wrote:
    But I don't know how many parallel processes should be spawned, or into how many files the dump should be split?

    How many parallel processes you need is something you can figure out by running tests against your DB, but if you have parallel set to N, then you need at least N dump files in order to fully use all the parallel threads spawned. We take backups using parallel=6, and dumpfile is normally set to dumpfile=dbname.%u.dmp, where Oracle expands %u as necessary if you do not explicitly list individual files.
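    For instance, a parfile along those lines for the schemas above - the %U template lets the server create one file per parallel worker (the PARALLEL value here is the poster's own; tune it by testing):

```
SCHEMAS=schema1,schema2,schema3,schema4,schema5,schema6
DIRECTORY=dpump_dir2
DUMPFILE=composite_schemas_expdp.%U.dmp
LOGFILE=composite_schemas_expdp.log
PARALLEL=3
```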

  • expdp with parallel writing to a single file at the OS level

    Hi friends,
    I am facing a strange problem. Despite the parallel=x parameter, expdp writes to only a single file at the OS level at any one time, even though it writes to several files in sequence (not simultaneously).

    While on other servers, I see expdp start writing to several files simultaneously. Here is a sample log of my expdp.
    ++++++++++++++++++++
    Export: Release 10.2.0.3.0 - 64 bit Production on Friday, April 15, 2011 03:06:50

    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    ;;;
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64 bit Production
    With partitioning, OLAP and Data Mining options
    Starting "CNVAPPDBO4"."EXPDP_BL1_DOCUMENT": CNVAPPDBO4/***@EXTTKS1 tables=BL1_DOCUMENT DUMPFILE=DUMP1_S:Expdp_BL1_DOCUMENT_%U.dmp LOGFILE=LOG1_S:Expdp_BL1_DOCUMENT.log CONTENT=DATA_ONLY FILESIZE=5G EXCLUDE=INDEX,STATISTICS,CONSTRAINT,GRANT PARALLEL=6 JOB_NAME=Expdp_BL1_DOCUMENT
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 23.93 GB
    . . exported "CNVAPPDBO4"."BL1_DOCUMENT"  17.87 GB  150951906 rows
    Master table "CNVAPPDBO4"."EXPDP_BL1_DOCUMENT" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for CNVAPPDBO4.EXPDP_BL1_DOCUMENT is:
    /tksmig/load2/Oracle/postpaidamd/DUMP1_S/Expdp_BL1_DOCUMENT_01.dmp
    /tksmig/load2/Oracle/postpaidamd/DUMP1_S/Expdp_BL1_DOCUMENT_02.dmp
    /tksmig/load2/Oracle/postpaidamd/DUMP1_S/Expdp_BL1_DOCUMENT_03.dmp
    /tksmig/load2/Oracle/postpaidamd/DUMP1_S/Expdp_BL1_DOCUMENT_04.dmp
    Job "CNVAPPDBO4"."EXPDP_BL1_DOCUMENT" successfully completed at 03:23:14

    ++++++++++++++++++++

    uname -a
    HP-UX ocsmigbrndapp3 B.11.31 U ia64 3522246036 unlimited-user license


    Is it hitting a known bug? Please suggest.

    regds,
    Malika

    http://download.Oracle.com/docs/CD/E11882_01/server.112/e16536/dp_export.htm#i1006259

  • Expdp parallel option

    Hi all

    I work with a 10.2.0.3 database on a two-node RAC.

    Every night I run an export using expdp and I need to reduce the time spent on it. Currently I use parallel=4, but I want to know if I can increase it. The server where I execute the export job has 4 processors with 4 cores each.

    I have read that I can increase the parallel value up to the number of available CPUs * 2.

    In my case, how do I count the number of CPUs? 4? 16?

    Kind regards
    dbajug

    For parallel Data Pump features, see:

    http://download.Oracle.com/otndocs/products/database/enterprise_edition/utilities/PDF/datapump11gr2_twp_rac_1009.PDF
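    One quick check is the value Oracle itself computed - cpu_count reflects the logical CPUs (cores, or threads where relevant) visible to the instance, and the "CPUs * 2" guideline is usually applied to that number:

```sql
-- cpu_count = logical CPUs visible to this instance
SELECT value FROM v$parameter WHERE name = 'cpu_count';
```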

    Thank you

    Published by: Cj on January 5, 2011 01:46

  • Error in expdp

    Hello

    I'm trying to export a schema with 2 tables that have CLOB columns (storing XML files). While running expdp I am getting the errors below.

    ORA-39097: Data Pump job encountered unexpected error -12801

    ORA-39065: unexpected master process exception in MAIN

    ORA-12801: error signaled in parallel query server PZ99, instance XXXXXX

    ORA-01460: unimplemented or unreasonable conversion requested

    Job "XXXXXXXX"."SYS_EXPORT_SCHEMA_01" stopped due to fatal error at 03:46:32

    My PARALLEL_DEGREE_POLICY parameter is already set to MANUAL. Please help.

    The error that you are getting is a little weird, so if I were you, and if the error is reproducible, I would trace your Data Pump job for more information.  You could trace the Oracle sessions of the Data Pump job, or add "trace=1FF0300" to the expdp command line; this will create trace files in your trace-file directory.  Search for 'trace=1FF0300' to find out how it works.
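    As a sketch, the traced run described above might look like this (schema, directory, and credentials are placeholders; trace=1FF0300 is the undocumented Data Pump trace mask the answer refers to):

```
expdp system/*** schemas=myschema directory=dpump_dir \
      dumpfile=myschema.%U.dmp trace=1FF0300
```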

  • Error in remote database expdp

    Hello

    When attempting to perform an export of a remote database, I get the following errors:

    ORA-39006: internal error

    ORA-39065: unexpected master process exception in DISPATCH

    ORA-06575: Package or function KUPM$MCP is in an invalid state

    ORA-39097: Data Pump job encountered unexpected error -6575

    Local DB running the export:

    Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

    PL/SQL Release 11.2.0.4.0 - Production

    CORE 11.2.0.4.0 Production

    TNS for Linux: Version 11.2.0.4.0 - Production

    NLSRTL Version 11.2.0.4.0 - Production

    Remote DB being exported:

    Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

    PL/SQL Release 12.1.0.2.0 - Production

    CORE 12.1.0.2.0 Production

    TNS for Linux: Version 12.1.0.2.0 - Production

    NLSRTL Version 12.1.0.2.0 - Production

    Export command (reconstructed):

    expdp / full=y network_link=remote_db directory=db_dump dumpfile=db.dmp logfile=db.log compression=all parallel=3 job_name=db_dump exclude=statistics,table:\"IN \(\'PLAN_TABLE\',\'TOAD_PLAN_TABLE\',\'OL\$\'\)\"

    Please note that the remote DB is a Data Guard standby of my production database; the following SQL commands are used to bring the remote DB online for the export:

    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

    ALTER DATABASE OPEN READ ONLY;

    This process worked when the local and remote DBs were both 11g - what am I missing to complete this export with 12c?

    Thank you

    Hey,

    I just wanted to follow up on this question in case someone else runs into the same problem.  Solving it took two steps: first install patch 21252520, then the following workaround:

    1. Convert the physical standby to a snapshot standby
    2. Open it read-write
    3. Convert the snapshot standby back to a physical standby
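    On recent versions, steps 1-3 correspond to the Data Guard CONVERT commands - a sketch, run on the standby (the database must be mounted for each CONVERT, and flashback logging space must be available):

```sql
-- 1) turn the physical standby into a writable snapshot standby
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;   -- 2) open read-write and run the export
-- 3) afterwards: restart to MOUNT, discard the changes, and
--    return to being a physical standby
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```
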
  • Does the parallel parameter really reduce time for export?

    Hi all;

    I want to know whether the export is really faster if I use the parallel option.

    As we know, DIRECT PATH is much faster than the EXTERNAL TABLE access path.

    From the EXPDP reference documents:

    When exporting using the parallel option, the data access method would be 'EXTERNAL TABLE'.

    For large tables, Data Pump always uses the external table path rather than direct path.

    In the above case, if the table is many GB in size with "n" partitions, can I use the parallel option for the export?

    The version is 11.2.0.1 on RHEL

    Hello

    In this case, there are two types of parallelism:

    (1) unloading a single table via external table using a parallel query - which in many cases is likely to be slower than a single serial process using direct path

    (2) extracting several tables "at the same time", each done as direct path - many workers each working on a different object at the same time

    So, for example, if you specify parallel 4, you can get 4 Data Pump worker processes; each of those processes can extract a different table at the same time, and each of those extracts uses direct path.

    For most tables that's how it will work; if you use QUERY, or you have some weird datatype or feature, then direct path may not be possible, but in almost all cases parallel export should be faster - because of the parallel unloading of objects.

    Parallel query against a single table via external table access is generally slower.

    Hope that makes sense...

    Cheers,

    Rich

  • Expdp RAC error

    Hi guys,

    I tried to run expdp against node 2 of an 11gR2 RAC database.

    expdp dumpfile=testenet_05052014%U.dmp logfile=testenet.log schemas=test_enet compression=all directory=backup cluster=N parallel=2

    Export: Release 11.2.0.4.0 - Production on Mon May 5 08:49:03 2014

    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

    Username: test_enet

    Password:

    UDE-00008: operation generated ORACLE error 1034

    ORA-01034: ORACLE not available

    ORA-27101: shared memory realm does not exist

    Linux-x86_64 error: 2: no such file or directory

    UDE-00003: all allowed logon attempts failed

    Thanks in advance

    REDA

    Hello

    When you say sqlplus works, how do you connect? expdp uses the same connection process, so if one works the other should too.

    Do you do sqlplus user/pass@db

    and expdp user/pass (without the @db)?

    Maybe that's part of it (hence my question about ORACLE_SID).

    Be aware that the "directory" is an Oracle directory object rather than a directory on the file system - so something like:

    SQL> create directory test as '/tmp';

    expdp directory=test

    Cheers,

    Rich

  • Expdp from an ordinary 11gR2 DB and Impdp into an 11gR2 RAC

    Hello

    I have to expdp from a single-instance 11.2.0.3 database and impdp into a two-node 11.2.0.4 RAC database.

    Do you know a link with good material I can follow as a tutorial?

    One thing I know is good for optimizing execution is to set table_exists_action=replace.

    Do you know other things I should do in order to get good performance?

    Thank you

    KZ

    Consider the PARALLEL option and the PARALLEL_EXECUTION_MESSAGE_SIZE parameter.

    Validate your constraints afterwards.

    Check the directory permissions and size.

    Also read: Data Pump Performance

    HTH

    Tobi

  • Accelerate exp/imp or expdp/impdp

    Hello

    Is it possible to speed up an exp/imp or expdp/impdp that is already running? Is it possible to speed up a running RMAN backup or restore process?

    Kind regards

    007

    To accelerate a running Data Pump export or import, you can attach to the job and increase the level of parallelism... impdp attach=job
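    As a sketch, attaching to a running import and raising its parallelism looks like this (the job name is a placeholder - query dba_datapump_jobs for the real one):

```
impdp system/*** attach=SYS_IMPORT_FULL_01
Import> parallel=8
Import> status
```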

    I don't know any way to speed up a running RMAN backup.

    To expedite an RMAN restore, you can kill the restore and re-run it using several channels.  The restore should take up where it left off and may run faster with many channels.  This is relevant only if you have several backup pieces.

  • expdp hangs after reaching 99% with "wait for unread message on broadcast channel"

    Here's what I do:

    I am exporting a complete PeopleSoft database on Solaris 10 using EXPDP.

    I am running the job as the SYSTEM user and capturing standard output in a log file.  $TODAY is just an exported date-stamp variable.


    This is my run line:

    expdp USERID=SYSTEM PARFILE=expdp.par DUMPFILE=psptst.$TODAY.f%U.dmp LOGFILE=expdp_$ORACLE_SID.$TODAY.log

    My parfile looks like this:

    DIRECTORY = data_pump_dir

    FULL = Y

    PARALLEL = 8

    CONTENT = ALL

    EXCLUDE=SCHEMA:"IN (select username from dba_users where user_id < 54)"

    I'm running the command on the same server where the database is located, and I have plenty of space.  The server is a Solaris M5000 with 64 GB of RAM and plenty of CPU power, and it is the only database running on it.  I also shut down the database and brought it up in restricted mode so nothing else can connect to it, since this export is for a migration to Linux and 11.2.  The current database is 11.1.0.7.5.

    My export runs until it hits 99% complete, then just hangs for hours and never finishes.  I checked the jobs in TOAD and all the worker processes are ACTIVE, but the dump files, the log file, and standard output are all no longer growing.

    Here is the output of STATUS when I attach to the Data Pump job:

    Export> status

    Job: SYS_EXPORT_FULL_01

    Operation: EXPORT

    Mode: FULL

    State: EXECUTING

    Bytes processed: 145,006,844,064

    Percent done: 99

    Current parallelism: 8

    Job error count: 0

    Dump file: /ora_exports/psptst/psptst.2013.Oct.14.1814.f%u.dmp

    Dump file: /ora_exports/psptst/psptst.2013.Oct.14.1814.f01.dmp

    bytes written: 27,062,280,192

    Dump file: /ora_exports/psptst/psptst.2013.Oct.14.1814.f02.dmp

    bytes written: 23,638,007,808

    Dump file: /ora_exports/psptst/psptst.2013.Oct.14.1814.f03.dmp

    bytes written: 10,316,234,752

    Dump file: /ora_exports/psptst/psptst.2013.Oct.14.1814.f04.dmp

    bytes written: 14,783,823,872

    Dump file: /ora_exports/psptst/psptst.2013.Oct.14.1814.f05.dmp

    bytes written: 22,908,686,336

    Dump file: /ora_exports/psptst/psptst.2013.Oct.14.1814.f06.dmp

    bytes written: 23,083,012,096

    Dump file: /ora_exports/psptst/psptst.2013.Oct.14.1814.f07.dmp

    bytes written: 23,221,575,680

    Dump file: /ora_exports/psptst/psptst.2013.Oct.14.1814.f08.dmp

    bytes written: 57,970,688

    Worker 1 Status:

    Process name: DW01

    State: WORK WAITING

    Worker 2 Status:

    Process name: DW02

    State: WORK WAITING

    Worker 3 Status:

    Process name: DW03

    State: WORK WAITING

    Worker 4 Status:

    Process name: DW04

    State: WORK WAITING

    Worker 5 Status:

    Process name: DW05

    State: WORK WAITING

    Worker 6 Status:

    Process name: DW06

    State: WORK WAITING

    Worker 7 Status:

    Process name: DW07

    State: WORK WAITING

    Worker 8 Status:

    Process name: DW08

    State: WORK WAITING

    The wait state of all the workers is "wait for unread message on broadcast channel".  The master process shows "db file sequential read" as its wait state.

    I have tried this with PARALLEL set to 4 and to 16 and get the same result.  I also tried with no parallel setting and get the same result: after 2 hours it just hangs indefinitely.

    I noticed that once "Bytes processed" hits 145,006,844,064 it just stops, whatever the parallel level.  I checked, and the database MAX_DUMP_FILE_SIZE is set to unlimited.

    There are no old jobs in dba_datapump_jobs, and the status of my job is RUNNING.

    The alert log is clean - just the redo log switches I expect.

    This has me scratching my head, as it appears the job just runs forever but goes nowhere.

    With the help of Oracle Support, the solution proved to be something very simple.

    The SYSTEM statistics were stale; once they were updated, the EXPDP flew!

    I ran these DBMS_STATS procedures (as SYSDBA):

    EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS(NULL);
    EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS;

    The export not only finishes, it's even faster than before. It exported the entire database (450 GB) in about 1 hour and 15 minutes (in the past it took more than 3 hours).  I used a parallel setting of 16.

  • very slow expdp for CLOB table

    Hello

    I use Oracle 10.2.0.4 on Windows 64-bit.

    I have a table with a CLOB column, and expdp of that table takes about 90 minutes to complete (I tested the PARALLEL parameter with expdp and found no significant difference with different parallel values or without the parameter).  The size of the table is 21G per user_segments, but the full dump file size is 11.9G.  Normally if I export a table of a similar size without any LOB, it takes less than 15 minutes to complete.

    Please help if there is a way to increase the speed of the export.  Thank you

    > CREATE a VIEW that does not include the CLOB & export it

    Can you please explain a little more?  How would I then get my CLOB data?  I am looking for a way to speed up the export (and to understand why it is slow when a CLOB is involved), not to skip data to make the export quick.

    Salman

  • Dynamically change the parallelism of a running job

    Hello

    right now I am running a Data Pump export of a data warehouse database.

    The export was started with parallel=4 and it is running now...

    1. Say I want to increase the parallel level to 16 - how can I change it dynamically while the export is running?

    (So far the export rate is 5.1/h - do you think this is normal or not?  The SAs want to change some NFS settings.  I think increasing the parallelism may not increase the export speed; based on my past experience it somehow still feels a bit slower than expected.)

    Thank you very much.

    database is 10.2.0.3 on Linux

    Hello
    You can increase the parallelism by attaching to the running job and using the parallel option.

    For example:

    expdp user/pass attach=your_job_name

    Once attached, you can then type parallel=x to increase the parallelism.
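    Put together, the interactive sequence looks roughly like this (the job name is a placeholder):

```
expdp user/pass attach=YOUR_JOB_NAME
Export> parallel=16
Export> status
```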

    When you say 5.1/hr - 5.1 of what? (GB?)

    Cheers,
    Harry
