By default, DataPump export is inconsistent...

Oh my...

I seem to have come to realize that the Oracle default for DataPump is an inconsistent export.

Recently, I tried to refresh our test environment from production and got:

ORA-39083: Object type REF_CONSTRAINT failed to create with error:
ORA-02298: cannot validate (VS2AU.FK_LEDGER_ROW_CALL) - parent key not found

For reference, we use Oracle 11.1.0.7 on AIX 6.1.

Oracle reference Note: 462645.1 says that we must use FLASHBACK_TIME or FLASHBACK_SCN with the NETWORK_LINK parameter.

Questions I have are:

Does this mean that we MUST have FLASHBACK_RETENTION set, as in having a flash recovery area put in place?
I hope not, because that has been problematic for the archived logs and RMAN (for us).

Then, how can I get the latest SCN or the current time in an automated cron script, to pass as a parameter?

Thanks in advance.


First - make sure you have all of the 'set' commands in place: set heading off, set feedback off, set linesize, set echo off, etc. When you run this script file, you will get a file named scn.par that contains something like:

flashback_scn = 527968

That's everything you want in this file. You don't want the heading, the "Connected to" lines, etc. The set linesize, set echo, etc. commands defined in the script I gave you will create the parfile the way you want it.
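
As a rough illustration only (this is not necessarily the exact script referenced above; the file name get_scn.sql and the use of DBMS_FLASHBACK are assumptions), such a SQL*Plus script could look like:

-- get_scn.sql: writes the current SCN to scn.par in parfile syntax.
-- Run from SQL*Plus as a user allowed to call DBMS_FLASHBACK (e.g. SYSTEM).
set heading off
set feedback off
set echo off
set verify off
set trimspool on
set linesize 200
spool scn.par
select 'flashback_scn=' || to_char(dbms_flashback.get_system_change_number) from dual;
spool off
exit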

Then run your DataPump job as:

expdp system/<password> \
  full=y \
  compression=all \
  parfile=scn.par \
  directory=<directory_name> \
  dumpfile=<dumpfile_name>.dmp \
  logfile=<logfile_name>.log

DataPump and classic exp/imp allow you to set some parameters in a file that is called a parfile. You specify it as:

parfile = my_par_file.par

In your case, the parfile is called scn.par and it contains the single line described above.
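
To tie this to the cron question: a wrapper along these lines would regenerate scn.par each night before calling expdp. Everything here is an illustrative assumption (paths, SID, the $SYSTEM_PWD variable, the get_scn.sql name and the DATA_PUMP_DIR directory object), not part of the original answer.

#!/bin/sh
# nightly_expdp.sh - illustrative sketch only; adapt all names and paths.
export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
export ORACLE_SID=PROD
export PATH=$ORACLE_HOME/bin:$PATH

# 1) write the current SCN into scn.par (see the get_scn.sql sketch above)
sqlplus -s system/"$SYSTEM_PWD" @get_scn.sql

# 2) run the consistent export, anchored at that SCN
expdp system/"$SYSTEM_PWD" full=y compression=all parfile=scn.par \
      directory=DATA_PUMP_DIR \
      dumpfile=full_`date +%Y%m%d`.dmp \
      logfile=full_`date +%Y%m%d`.log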

Dean

Published by: Dean Gagne on 14 August 2009 13:42

Tags: Database

Similar Questions

  • Change default export to "entire sequence"

    I got burned again.

    Is there a way to change the default export setting to "entire sequence" instead of the incredibly stupid "sequence in/out"?

    When you have a lot, say 12 long sequences, you can't help but miss those stupid little in and out points.

    I would like to have the default be "entire sequence". 999 times out of 1000 I export the entire sequence.

    Having to check it each time is boring. And I have been burned many times by this feature.

    Avid, FCP 6, 7, and X all export the entire sequence unless told otherwise.

    Any solutions?

    Hello Avid MCD,

    Is there a way to change the default export setting to "entire sequence" instead of the incredibly stupid "sequence in/out"?

    I would like to have the default be "entire sequence". 999 times out of 1000 I export the entire sequence.

    Having to check it each time is boring. And I have been burned many times by this feature.

    It would be a great feature request. Please be sure to submit it here.

    Thank you

    Kevin

  • Default export settings?

    Is there any way to set the default export settings? Since I installed the new update, it seems as if Premiere no longer remembers the last setting that I used to export - as I often do 11-12 videos per day, it is a huge waste of time to have to continuously select the same export settings again and again.

    Thanks in advance for any help

    It should be in your User Presets, at the top of the presets in the standalone AME window.

    If it is not there... export it as a preset, then import it so it appears in User Presets.

    Name it so it tops the list in alphanumeric order.

  • Is it possible to set a default export path?

    When you export media, is it possible to set a default export path and the default file name?

    I tried to research a bit on Google and the forum, but did not find specific solutions.

    Which is one of the reasons that it is a good idea to rename clips to match something that you would recognize as a filename suitable for your export.

    However, unfortunately, the answer to your question is "it is not possible to define a default name for the export".

    Heck, we are still trying to get Adobe to add the file path name to the Export panel!

    In any case, my solution was to put everything in one folder on a separate disk, and then I end up moving them from there.

  • error when exporting datapump

    Hi gurus,

    Oracle Version: 10.2.0.3
    Operating system: Linux

    Today we had a problem when the DataPump export ran on the production server; the error is:
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 9.126 GB
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    ORA-39127: unexpected error from call to export_string := SYS.DBMS_RMGR_GROUP_EXPORT.grant_exp(4869,1,...) 
    ORA-06502: PL/SQL: numeric or value error: NULL index table key value
    ORA-06512: at "SYS.DBMS_RMGR_GROUP_EXPORT", line 130
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 5118
    ORA-39127: unexpected error from call to export_string := SYS.DBMS_RMGR_GROUP_EXPORT.grant_exp(4868,1,...) 
    ORA-06502: PL/SQL: numeric or value error: NULL index table key value
    ORA-06512: at "SYS.DBMS_RMGR_GROUP_EXPORT", line 130
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 5118
    ORA-39127: unexpected error from call to export_string := SYS.DBMS_RMGR_GROUP_EXPORT.grant_exp(4867,1,...) 
    ORA-06502: PL/SQL: numeric or value error: NULL index table key value
    ORA-06512: at "SYS.DBMS_RMGR_GROUP_EXPORT", line 130
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 5118
    Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/COMMENT
    Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
    Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
    Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
    Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
    Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
    Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
    Processing object type SCHEMA_EXPORT/VIEW/VIEW
    Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    ORA-39127: unexpected error from call to export_string := SYS.DBMS_RMGR_GROUP_EXPORT.grant_exp(4869,1,...) 
    ORA-06502: PL/SQL: numeric or value error: NULL index table key value
    ORA-06512: at "SYS.DBMS_RMGR_GROUP_EXPORT", line 130
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 5118
    ORA-39127: unexpected error from call to export_string := SYS.DBMS_RMGR_GROUP_EXPORT.grant_exp(4868,1,...) 
    ORA-06502: PL/SQL: numeric or value error: NULL index table key value
    ORA-06512: at "SYS.DBMS_RMGR_GROUP_EXPORT", line 130
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 5118
    ORA-39127: unexpected error from call to export_string := SYS.DBMS_RMGR_GROUP_EXPORT.grant_exp(4867,1,...) 
    ORA-06502: PL/SQL: numeric or value error: NULL index table key value
    ORA-06512: at "SYS.DBMS_RMGR_GROUP_EXPORT", line 130
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 5118
    . . exported "QFUNDAPAYPROD_JUN09"."ST_LO_TRANS"         213.3 MB 2017789 rows
    . . exported "QFUNDAPAYPROD_JUN09"."ST_EXTERNAL_LOG"     712.6 MB 14029747 rows
    . . exported "QFUNDAPAYPROD_JUN09"."ST_LO_MASTER"        198.1 MB  915035 rows
    . . exported "QFUNDAPAYPROD_JUN09"."ST_LO_APPORTIONS"    110.7 MB 3951926 rows
    Can someone please help me? It is a serious problem; how do I solve it?

    Thank you and best regards,
    Kahina Prasad.S

    SIDDABATHUNI wrote:
    Hi orawiss,

    Here is my export command:

    expdp username/password directory=dbdump dumpfile=D_JUN09_$DT.dmp exclude=statistics,grants,PROCOBJ logfile=D_JUN09_$DT.log job_name=QFU
    

    But only this morning I added PROCOBJ to the exclude parameter.

    Thank you and best regards,
    Kahina Prasad.S

    This is Bug 4358907; see the MOS note "ORA-39127 using DataPump exp" [ID 451987.1].

    Try not to use the exclude option

  • Can't export (DataPump) an Oracle 10gR2 (10.2.0.3) database from 10.2.0.4

    Hello

    Here's our environment (OS, RDBMS) for all our Oracle machines:

    - Windows Server 2003 SP2 and Windows x64 Enterprise servers
    - Oracle 10g R2, 10.2.0.3 and 10.2.0.4

    I'm simply trying to export a schema (from a 64-bit Windows server running Oracle 10.2.0.4) that lives in an Oracle 10.2.0.3 database (using DataPump).

    I have no error messages, but nothing happens. Here is the log of the DataPump export run:
    ;;;

    Export: Release 10.2.0.4.0 - 64 bit Production on Wednesday, September 16, 2009 10:09:54

    Copyright (c) 2003, 2007, Oracle.  All rights reserved.
    ;;;

    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
    With the options of Real Application Clusters, partitioning, OLAP and Data Mining

    But I am able to import (again using DataPump) a schema that was in an Oracle 10.2.0.3 database into that same 64-bit Oracle 10.2.0.4 machine.

    I was sure that I was able to export (DataPump) a database at a lower version than the version of the software running the DataPump export.

    Any idea?

    Thank you.

    Since the dump file will be created server-side anyway, why do you want to export remotely with 10.2.0.4 against 10.2.0.3? Better to use the same version of the utility as the server (a rough command sketch follows below).

    Moving data between different database Versions
    http://download.Oracle.com/docs/CD/B19306_01/server.102/b14215/dp_overview.htm#CEGFCFFI
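
    As a rough editorial illustration of that advice (the schema, directory object and file names are placeholder assumptions), you would run the export on the server that hosts the 10.2.0.3 database, using that home's own expdp binary:

    # run on the 10.2.0.3 server itself, with that Oracle home's expdp first in the PATH
    expdp system/<password> schemas=MYSCHEMA directory=DATA_PUMP_DIR dumpfile=myschema.dmp logfile=myschema.log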

  • Default Data Pump export include options

    Hi members,
    Can anyone tell me if a DataPump schema export exports system grants, object grants and schema privileges by default? Is there anywhere in the documentation where I can find a complete list of the defaults? Thanks in advance.

    If you exported using a user with the DBA role, it should include the grants and privileges. Try the SQLFILE option to generate the DDL and see if those GRANTS and privileges are included (a rough command sketch follows).
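
    For illustration only (the dump file name, directory object and output file below are placeholder assumptions), the SQLFILE approach looks like:

    impdp system/<password> directory=DATA_PUMP_DIR dumpfile=schema_exp.dmp sqlfile=schema_ddl.sql

    This writes the DDL contained in the dump (including any GRANT statements) to schema_ddl.sql without actually importing anything.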

  • Import with DataPump when the DataPump export was executed as user SYS

    Hi all

    All I have is a dump file and a DataPump export log file. The export was executed as user SYS:

    Export: Release 11.2.0.1.0 - Production on Wed Dec 3 12:02:22 2014

    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    ;;;
    Connected to: Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
    Starting "SYS"."SYS_EXPORT_FULL_01": "sys/***@database AS SYSDBA" directory=data_pump_dir dumpfile=db_full.dmp logfile=db_full.log full=y
    Estimate in progress using BLOCKS method...
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 52.84 GB

    Now I have to import (with DataPump) the users USER01 and USER02 from this export. But I don't know the names of the source database tablespaces.

    I want to keep the user names (USER01/USER02). That's why I created these users in the target database.

    The questions I have are:

    - Should I start the DataPump import as user SYS?

    - What parameters should I use to import users USER01 and USER02 and their data into the target database? Since I do not know the names of the tablespaces in the source database, the REMAP_TABLESPACE parameter will not help.

    Any help will be appreciated

    J.

    Hi J.

    The questions I have are:

    - Should I start the DataPump import as user SYS?

    No, you need to import with a user that has the "imp_full_database" role.

    - What parameters should I use to import users USER01 and USER02 and their data into the target database? Since I don't know the names of the tablespaces in the source database, the REMAP_TABLESPACE parameter will not help.

    Well, one idea is to generate a SQLFILE from a schema-mode import and see in the DDL which tablespaces it will try to create the objects in (a fuller sketch appears at the end of this item).

    impdp \"/ as sysdba\" directory=<directory> dumpfile=<dumpfile> schemas=USER01,USER02 sqlfile=<file>.sql

    For more information, take a look at the import documentation

    http://docs.Oracle.com/CD/B19306_01/server.102/b14215/dp_import.htm

    Hope this helps,

    Kind regards

    Anatoli A.
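
    As an editorial sketch of the approach above (the directory object, output file names and tablespace names are placeholder assumptions; only the dump file name db_full.dmp comes from the log quoted earlier):

    # 1) generate the DDL only, to discover which tablespaces the dump expects
    impdp \"/ as sysdba\" directory=DATA_PUMP_DIR dumpfile=db_full.dmp schemas=USER01,USER02 sqlfile=users_ddl.sql

    # 2) once the source tablespace names are known from users_ddl.sql, run the real import,
    #    remapping them to tablespaces that exist in the target database
    impdp \"/ as sysdba\" directory=DATA_PUMP_DIR dumpfile=db_full.dmp schemas=USER01,USER02 remap_tablespace=SRC_TS:TARGET_TS logfile=users_imp.log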

  • UNC path in default export location hangs the software

    So I have tried everything to find a way to change the default output location in Premiere Pro when exporting. I know how to do it, but clicking on the output name to change it hangs the application. My guess is that it is because my default location is on my file server.

    This happens in PP and AME. And if I run the export to the default location, I get an unknown error.

    How can I get this resolved? The software is pretty useless right now.

    Premiere Pro CC, latest version of course, on a Windows 10 Enterprise machine, with the latest updates of course. The hardware is a Lenovo Yoga 12.

    Advice would be much appreciated.

    Unless your network storage is a SAN, export to a local drive and then move the exported file to the share.

  • Parallel sessions on Export Datapump (10.2.0.4)

    Hello

    We use Oracle 10.2.0.4 on Solaris, and I am exporting a table using DataPump export.

    The export includes a query that selects from three tables based on the relevant conditions. The parfile specifies "parallel=4" and the dumpfile setting uses %U so that it creates an appropriate number of files (a sketch of such a parfile appears at the end of this item).

    When I run the export using my own (DBA) account (i.e. expdp mr_dba parfile=exp_xyz.par), the export completes in 15 minutes and creates four dump files. When I run the export as the schema owner using the exact same parfile (i.e. expdp schema_own parfile=exp_xyz.par), the export takes more than two hours and does so with only two dump files.

    Could someone suggest things I could look at to find out why there is such a difference in elapsed time? The exports have been run repeatedly as both users with the box under similar load, and the results are pretty consistent, i.e. 15 minutes for my user and two hours for the schema owner.

    The schema owner has a different profile and a different resource consumer group, but both my profile and the schema owner's profile have "sessions_per_user" set to Unlimited. In Resource Manager, the Parallel_Degree_Limit_P1 value is set to 16 for my consumer group and is not defined at all for the schema owner's consumer group.

    I did notice that when exporting as the schema owner, DBA_DATAPUMP_SESSIONS showed a DBMS_DATAPUMP session, a MASTER session and two WORKER sessions. When I run it under my user name, it shows those four sessions but also shows three EXTERNAL TABLE sessions. This suggests that it is using a different approach, but I don't know what would cause this.

    Any advice would be very welcome. I have not posted specific information about the parameter file or the tables as I don't know what information people may require - so if you need details of anything, please let me know.

    Thank you very much.

    Is the schema that takes more time privileged or not? If it doesn't have the privilege, can you grant it and let me know if that makes a difference? I seem to remember hearing something about parallel and non-privileged accounts, but can't seem to find anything about it.

    Thank you

    Dean
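
    For reference, a parfile along the lines the poster describes might look like this (the table name, query and file names are placeholder assumptions, not taken from the thread):

    # exp_xyz.par - illustrative sketch only
    directory=DATA_PUMP_DIR
    tables=XYZ_MAIN
    query=XYZ_MAIN:"where status = 'OPEN'"
    parallel=4
    dumpfile=exp_xyz_%U.dmp
    logfile=exp_xyz.log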

  • Export DataPump - how to import specified users?

    Hi gurus,

    I have a big problem. I exported all of the Oracle instance's users using the DataPump command, and right now I have a problem importing into the new instance.

    Is it possible to import only specified users (not the full instance) from the DataPump dump file?

    You can use the schemas & remap_schema options.

    If you want to import with the same name:

    impdp directory=directory_name dumpfile=dumpfile_name.dmp logfile=logfile_name.log schemas=schema_name

    If you want to import with a different name:

    impdp directory=directory_name dumpfile=dumpfile_name.dmp logfile=logfile_name.log schemas=schema_name remap_schema=export_schema:import_schema

    You may also try to import into a different tablespace using remap_tablespace.

    Published by: Riou on July 9, 2010 01:12

  • Export DataPump with the query option

    Hi all

    My environment is IBM AIX, Oracle 10.2.0.4.0 database.

    I need to export a few sets of records from production using a query. The query joins several tables. Since we have the BLOB data type, we export using DataPump.

    We have lower environments, but they do not have the same set of data and tables, so I am not able to simulate the same query there. But I created a small table and mocked up the query.

    My command is:

    expdp system/<pwd>@orcl tables=dump.dump1 query=dump.dump1:'"where num < 3"' directory=DATA_PUMP_DIR dumpfile=exp_dp.dmp logfile=exp_dp.log

    The query in the command pulls just two records. Running the command above, I see an 80 KB dump file.

    In the export log file I see "Total estimation using BLOCKS method: 64 KB"
    and ". . exported DUMP.DUMP1  4.921 KB  2 rows".

    My doubts are:
    (1) Is the command that I am running correct?
    (2) The estimate said 64 KB, while it also says 4.921 KB was exported. But the dump file created is 80 KB. Was it exported correctly?
    (3) Given that I run it as SYSTEM, will it export all the data rather than just the 2 rows? We must send the dump file to another department, and we should not export any data other than the query output.
    (4) If I do not use "tables=dump.dump1" in the command, will the export file be a big mess? Not sure which is right.

    Your answers would be very useful.

    The short answer is "YES", it did the right thing.

    The long answer is:

    The query in the command pulls just two records. Running the command above, I see an 80 KB dump file.

    In the export log file I see "Total estimation using BLOCKS method: 64 KB"
    and ". . exported DUMP.DUMP1  4.921 KB  2 rows".

    My doubts are:
    (1) Is the command that I am running correct?

    Yes, as long as your query is correct. Data Pump will export only the rows that match the query.

    (2) The estimate said 64 KB, while it also says 4.921 KB was exported. But the dump file created is 80 KB. Was it exported correctly?

    The estimate is made using the full table. Since you did not specify otherwise, it used the BLOCKS estimation method - basically, how many blocks have been allocated to this table. In your case, I guess it was 80 KB.

    (3) Given that I run it as SYSTEM, will it export all the data rather than just the 2 rows? We must send the dump file to another department, and we should not export any data other than the query output.

    It will not export all the data, but it is going to export metadata. It exports the table definition, all indexes on it, all the statistics on the table or indexes, etc. This is why the dump file could be bigger. There is also a "master" table that describes the export job, which gets exported. This is used by export and import to find what is in the dumpfile, and where in the dumpfile those things are. It is not user data, but this table needs to be exported and will take up space in the dumpfile.

    (4) If I do not use "tables=dump.dump1" in the command, will the export file be a big mess? Not sure which is right.

    If you only want this table, then your export command is right. If you want to export more, then you need to change your export command. From what you say, it seems that your command is correct.

    If you do not want any metadata exported, you can add:

    content = data_only

    on the command line. This will export only the data, and when the dumpfile is imported, the table must already exist (a full command sketch follows at the end of this item).

    Dean
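
    Putting that together, a sketch of the poster's command with the data-only option added (the new dump and log file names are assumptions so the original files are not overwritten):

    expdp system/<pwd>@orcl tables=dump.dump1 query=dump.dump1:'"where num < 3"' content=data_only directory=DATA_PUMP_DIR dumpfile=exp_dp_data.dmp logfile=exp_dp_data.log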

  • Import/Export DataPump for refreshing production ERP test

    Hi all

    We have Oracle Applications 11i running on AIX 5.3 ML8 in production and test. Production is standalone with a 10.2.0.4 database, and test is RAC with database version 10.2.0.4.

    Now the question is that we intend to refresh test from production every night. We plan to use DataPump export/import for that. I just wanted to know from you guys whether anybody has had a bad experience using this utility with the ERP.

    Thank you and best regards,
    Vijay

    Vijay,

    I am hesitant about using export/import because someone told me that with an ERP-type database we should not use export/import, although he was not able to give the exact reason for this. Is there any problem with using export/import to refresh the ERP database?

    Using export/import is supported with the Oracle Apps database (for full database exp/imp and certain schemas such as custom ones). For the Apps schema, I think it is not supported due to object dependencies and integrity constraints. However, you can open an SR with Oracle Support to confirm this.

    Kind regards
    Hussein

  • Compression and Export DataPump

    Is it possible to compress a Data Pump export?

    Which DB version?

    In Oracle 12c it is possible, but you need Oracle DB Enterprise Edition with the Advanced Compression Option:

    http://docs.Oracle.com/database/121/Sutil/dp_export.htm#SUTIL4051
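
    For illustration only (the schema, directory object and file names are placeholders, and as noted above COMPRESSION=ALL needs Enterprise Edition with the Advanced Compression Option), the syntax looks like:

    expdp hr/<password> directory=DATA_PUMP_DIR dumpfile=hr_comp.dmp logfile=hr_comp.log compression=all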

  • export DataPump

    Hi all experts.

    We are on Windows Server 2008 R2 and our database is 11.2.0.3 on two RAC nodes. I want to refresh a staging schema from the same schema in production. I know I have to export the schema with expdp and then import it into staging. Are there other steps that I have to perform? My steps are below; please let me know if I need to save any information in staging, or do any steps in staging, prior to the import.

    (1) back up the staging schema with expdp

    (2) export schema X from production

    (3) drop user/schema X in staging

    (4) drop the tables, constraints, and objects of schema X in staging

    (5) import production schema X into staging

    (6) recompile invalid objects for schema X in staging

    Do I have to save all the privileges and grants of schema X in staging before importing? Please give me your comments. Thank you.

    Hello

    Regarding 3 and 4: if your staging user contains objects, you can use the CASCADE option of DROP USER; this will also remove its objects (a rough sketch follows below).

    If you need to keep the grants and privileges, you can instead drop all of the staging user's objects and keep the user itself.

    Kind regards

    --

    Bertrand
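
    As an editorial sketch of the drop-and-reimport variant of steps 3 to 5 (the schema name X comes from the question; the dump file name and directory object are placeholder assumptions):

    # 3) + 4) drop the staging user X and all of its objects in one step
    echo "drop user X cascade;" | sqlplus -s "/ as sysdba"

    # 5) import the production export of schema X into staging
    impdp system/<password> directory=DATA_PUMP_DIR dumpfile=x_prod.dmp logfile=x_imp.log schemas=X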
