Data Pump parallel degree
Hello, I have an impdp job that has been running a long time, and I think it may be due to too high a degree of parallelism. Currently it takes about a minute for a connection (session) to be created.
Is there a way to reduce this number in the middle of an import? I can see the degree in DBA_DATAPUMP_JOBS and the corresponding sessions. Can I update this Data Pump table to reduce the number of sessions Data Pump creates? What comes to mind is updating the DEGREE column, but it seems risky.
On a related topic, what are the general rules for deciding on a number of parallel processes?
This is Oracle 11.2.0.3.
Thank you and best regards.
I can see the degree in DBA_DATAPUMP_JOBS and the corresponding sessions. Can I update this Data Pump table to reduce the number of sessions Data Pump creates? What comes to mind is updating the DEGREE column, but it seems risky.
You should not be updating any Data Pump tables directly.
You can use the command line interface and attach to the session of the import job doing the work.
impdp help=y (see the bottom section of the output):
The following commands are valid while in interactive mode. Note: abbreviations are allowed.
PARALLEL: Change the number of active workers for the current job.
I have not done it myself, but it seems the interactive PARALLEL command can help you.
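For example, a minimal sketch of changing it on a running job (the job name here is hypothetical; take the real one from the import log or from DBA_DATAPUMP_JOBS):
impdp system/password attach=SYS_IMPORT_FULL_01
Import> PARALLEL=2
Import> CONTINUE_CLIENT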
Tags: Database
Similar Questions
-
Differences between Data Pump and legacy import and export
Hi all
I work as a junior DBA in my organization and I have seen that my organization still uses the legacy import and export utilities instead of Oracle Data Pump.
I want to convince my manager to replace the existing process with Oracle Data Pump; I have a meeting with them to make my points and convince them to use the Oracle Data Pump utility.
I am not very good at convincing people and I don't want to embarrass myself, so I really need strong arguments for Data Pump versus import and export. It would be really appreciated if someone could list strong points for Oracle Data Pump over legacy import and export.
Thank you
Cabbage
Hello
As other people have already said, the main advantage of Data Pump is performance - it is not just a little faster than exp/imp, it is massively faster (especially when combined with parallel).
It is also more flexible (much more so) - it will even create users for schema-level exports/imports, which imp could never do for you (and it was always very annoying that it couldn't).
It is restartable.
It has a PL/SQL API.
It supports all object types and new features (exp does not - and that alone is probably reason enough to switch).
There is even a 'legacy' mode in 11.2 where most of your old exp parameter files will still work with it - just change exp to expdp and imp to impdp.
The main obstacle to the transition to Data Pump seems to be objections like "what do you mean I have to create a directory for it to work", "where is my dumpfile", and "why can't it be on my local machine". These are minor things to get past.
I suggest you do some sort of demo with real data from one of your large databases - do a full exp and a full expdp with parallel and show them the runtimes so they can compare...
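A rough sketch of such a demo (credentials, file names and the parallel degree are only placeholders):
exp system/password full=y file=full_exp.dmp log=full_exp.log
expdp system/password full=y directory=DATA_PUMP_DIR dumpfile=full_dp%U.dmp parallel=4 logfile=full_dp.log
Comparing the elapsed times in the two log files usually makes the point on its own.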
Cheers,
Rich
-
ODI LKM Oracle to Oracle (Data Pump) question
Hi all
I have a weird problem in ODI.
I am joining per_all_people_f and fnd_user to load w_user_ds using Oracle Data Integrator. The LKM used is LKM Oracle to Oracle (Data Pump).
When I run the interface, I get the error below:
ODI-1227: task USER_DATA_SET (load) failed on the source connection ORACLE EBS.
Caused by: java.sql.SQLSyntaxErrorException: ORA-00923: FROM keyword not found where expected
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:462)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:405)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:931)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:481)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:205)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:548)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:217)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1115)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1488)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3769)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3954)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1539)
at oracle.odi.runtime.agent.execution.sql.SQLCommand.execute(SQLCommand.java:163)
at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:102)
at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:1)
at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:50)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2913)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2625)
at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:577)
at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:468)
at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:2128)
at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:366)
at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:216)
at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:300)
at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:292)
at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:855)
at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:126)
at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)
at java.lang.Thread.run(Thread.java:662)
The generated code is:
create table X780021
(
C1_FIRST_NAME,
C2_MID_NAME,
C3_LAST_NAME,
C4_FULL_NAME,
C5_NAME_SUFFIX,
C6_SEX_MF_CODE,
C7_SEX_MF_NAME,
C8_COUNTRY_NAME,
C9_LOGIN,
C10_CREATED_BY_ID,
C11_CHANGED_BY_ID,
C12_CREATED_ON_DT,
C13_CHANGED_ON_DT,
C14_AUX1_CHANGED_ON_DT,
C15_SRC_EFF_TO_DT,
C16_INTEGRATION_ID,
C17_EFFECTIVE_START_DATE
)
ORGANIZATION EXTERNAL
(
TYPE oracle_datapump
DEFAULT DIRECTORY dat_dir
LOCATION ('X780021.exp')
)
PARALLEL
AS SELECT
ALL_PEOPLE_F.FIRST_NAME,
ALL_PEOPLE_F.MIDDLE_NAMES,
ALL_PEOPLE_F.LAST_NAME,
ALL_PEOPLE_F.FULL_NAME,
ALL_PEOPLE_F.SUFFIX,
ALL_PEOPLE_F.SEX,
ALL_PEOPLE_F.SEX,
ALL_PEOPLE_F.NATIONALITY,
USER.USER_NAME,
ALL_PEOPLE_F.CREATED_BY,
ALL_PEOPLE_F.LAST_UPDATED_BY,
ALL_PEOPLE_F.CREATION_DATE,
ALL_PEOPLE_F.LAST_UPDATE_DATE,
ALL_PEOPLE_F.CREATION_DATE,
ALL_PEOPLE_F.EFFECTIVE_END_DATE,
USER.USER_ID,
ALL_PEOPLE_F.EFFECTIVE_START_DATE
from APPS.FND_USER USER, APPS.PER_ALL_PEOPLE_F ALL_PEOPLE_F
where (1 = 1)
and (ALL_PEOPLE_F.PERSON_ID = USER.EMPLOYEE_ID)
I don't see what the problem is here.
Can someone help me?
Thank you and best regards,
Krishna Prasad
I found the problem: it's the way ODI generated the alias for the FND_USER table. By default it produces USER as the alias, which is an Oracle keyword. We just needed to rename it to something else, and it worked.
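For illustration, this is roughly what the corrected join would look like once the alias is changed (FND_U is just an arbitrary replacement alias, not necessarily what ODI generates):
from APPS.FND_USER FND_U, APPS.PER_ALL_PEOPLE_F ALL_PEOPLE_F
where (1 = 1)
and (ALL_PEOPLE_F.PERSON_ID = FND_U.EMPLOYEE_ID)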
-
Moving a database from one server to another via Data Pump - some queries
Hello
I am planning to move my database from one Windows server to another. Part of the requirement is also to upgrade this database from 10g to 11.2.0.3, so I'm combining the two tasks using the export/import (via Data Pump) upgrade method.
Regarding the export/import (which will be a full database Data Pump export), my plan is:
create an empty 11.2.0.3 target database on the new server (same number of control files, redo logs etc.) ready for an import of the source database
Q1. This will create the SYSTEM and UNDO tablespaces - I presume the Data Pump export doesn't include these tablespaces anyway?
For the export, I intend to simulate CONSISTENT=Y using FLASHBACK_TIME=SYSTIMESTAMP.
Q2. Which flashback features must be enabled on the source database to use this - in particular, should I have Flashback Database enabled on the source (as opposed to other flashback features)?
My target is a virtual server with a single virtual processor
Q3. Is there any point using PARALLEL in the import parameter file (normally I would set this to the number of processors - however in the case of this virtual server, there is actually only one virtual processor)?
For the import, I intend to use the REMAP_DATAFILE option to change the location of the data files on the target server.
Q4. If the import fails before the end, what is the best course of action? For example, do I just drop the data tablespaces and redo a complete import?
Thank you
Jim
Jim,
I'll take a pass at your questions:
> create an empty 11.2.0.3 target database on the server (same number of control files, redo logs etc.) ready for an import of the source database
> Q1. This will create the SYSTEM and UNDO tablespaces - I presume the datapump export does not include these tablespaces anyway?
The SYSTEM tablespace is created when you create a database, but Data Pump will export it and try to import it; that will fail with "tablespace already exists". I am fairly sure the UNDO tablespace will also be exported and imported. If they are already there, the import will simply report that they already exist.
> For the export, I intend to simulate CONSISTENT=Y using FLASHBACK_TIME=SYSTIMESTAMP
> Q2. Which flashback features must be enabled on the source database to use this - should I have Flashback Database enabled on the source (as opposed to other flashback features)?
I don't know for sure about this one. I thought you just need enough undo, but hopefully others will chime in.
> My target is a virtual server with a single virtual processor
> Q3. Is there any point using PARALLEL in the import parameter file (normally I would set this to the number of processors - however in the case of this virtual server, there is actually only one virtual processor)?
We usually recommend 2 times the number of processors, so PARALLEL=2 should be OK.
> For the import, I intend to use the REMAP_DATAFILE option to change the location of the data files on the target server
> Q4. If the import fails before the end, what is the best course of action? i.e. do I just drop the data tablespaces and redo a full import?
It depends on what the failure is. Most failures will not stop the job, but if one does, most of these jobs can simply be restarted. To restart a job, you just need to know the job name, which is printed when you start the export/import, or which you set yourself in your Data Pump command. To restart, do this:
impdp user/password attach=job_name
If you did not name the job, the job name will be something like
USER.SYS_IMPORT_FULL_01
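If you did not note the name, a quick way to look it up (a small sketch, nothing job-specific assumed):
SELECT owner_name, job_name, state FROM dba_datapump_jobs;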
Hope that helps - and good luck with your migration.
Dean
-
Data Pump - expdp/impdp utility question
Hi All,
As a learning exercise on Data Pump, I am trying to export the scott schema and then load the dump file into a test_t schema in the same database.
- 1. created the directory object as sysdba:
CREATE DIRECTORY dpump_dir1 AS 'C:\output_dir';
- 2. created the directory on the operating system as c:\output_dir
- 3. ran expdp:
expdp system/***@orcl schemas=scott directory=dpump_dir1 job_name=hr dumpfile=scott_orcl_nov5.dmp parallel=4
- 4. created a test_t schema and granted it the dba role.
- 5. ran impdp:
impdp system/***@orcl schemas=test_t directory=dpump_dir1 job_name=hr dumpfile=scott_orcl_nov5.dmp parallel=8
It fails here with ORA-39165: Schema TEST_T was not found. However, the schema test_t exists.
So I don't know why it gives this error. It seems that the scott schema in the expdp dump file cannot be loaded into any other schema, only into a schema named scott... Is that right? If so, how can I load all the objects of the scott schema into another schema, say test_t? It would be helpful if you could show the respective expdp and impdp commands, please.
Thank you very much
KS
Hello,
You must specify REMAP_SCHEMA when you import:
impdp system/***@orcl directory=dpump_dir1 dumpfile=scott_orcl_nov5.dmp remap_schema=scott:test_t
Cheers
-
Export the whole database (10 GB) using the Data Pump export utility
Hello
I have a requirement where we have to export the whole database (10 GB) using the Data Pump export utility, because it is not possible to send a 10 GB dump on a single CD/DVD to the vendor of our application system (to analyze a few problems that we have).
Now, when I checked online, full export is available but I am not able to understand how it works, as we have never used this Data Pump utility; we use the normal export method. Also, will Data Pump reduce the size of the dump file so it can fit on a DVD, or can we use the utility to run a parallel full DB export that splits the files so we can put them on DVDs? Is that possible?
Please correct me if I am wrong and kindly help.
Thank you for your help in advance.
Pravin,
The server writes the files to the directory object that you specify on the command line. So what you want to do is:
1. On your operating system, find an existing directory or create a new one. In your case, C:\Dump is as good a place as any.
2. Connect with sqlplus and create the directory object. Just use the path. I use Linux, so my directory looks like /scratch/xxx/yyy.
If you use Windows, the path for your directory would look like C:\Dump instead.
3. Don't forget to grant access to the directory. You can grant access to a single user, a group of users, or to public - just like any other object.
If this helps or answers your question, please mark the posts accordingly.
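A minimal sketch tying the steps together (the path, user name and 4 GB piece size are placeholder assumptions; pick a FILESIZE that fits your DVDs):
CREATE DIRECTORY dump_dir AS 'C:\Dump';
GRANT READ, WRITE ON DIRECTORY dump_dir TO scott;
expdp system/password full=y directory=dump_dir dumpfile=full_%U.dmp filesize=4G logfile=full_exp.log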
Thank you
Dean
-
Data pump - export without data
To export the database without data, in the old exp tool the parameter ROWS was set to N. How do I export the database schema without data using Data Pump?
You can see the answer by checking the expdp help from your command line, like this:
C:\Documents and Settings\nupneja>expdp -help Export: Release 10.2.0.1.0 - Production on Friday, 09 April, 2010 18:06:09 Copyright (c) 2003, 2005, Oracle. All rights reserved. The Data Pump export utility provides a mechanism for transferring data objects between Oracle databases. The utility is invoked with the following command: Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp You can control how Export runs by entering the 'expdp' command followed by various parameters. To specify parameters, you use keywords: Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN) Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott or TABLES=(T1:P1,T1:P2), if T1 is partitioned table USERID must be the first parameter on the command line. Keyword Description (Default) ------------------------------------------------------------------------------ ATTACH Attach to existing job, e.g. ATTACH [=job name]. COMPRESSION Reduce size of dumpfile contents where valid keyword values are: (METADATA_ONLY) and NONE. *CONTENT* Specifies data to unload where the valid keywords are: (ALL), DATA_ONLY, and METADATA_ONLY. DIRECTORY Directory object to be used for dumpfiles and logfiles. DUMPFILE List of destination dump files (expdat.dmp), e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp. ENCRYPTION_PASSWORD Password key for creating encrypted column data. ESTIMATE Calculate job estimates where the valid keywords are: (BLOCKS) and STATISTICS. ESTIMATE_ONLY Calculate job estimates without performing the export. EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP. FILESIZE Specify the size of each dumpfile in units of bytes. FLASHBACK_SCN SCN used to set session snapshot back to. FLASHBACK_TIME Time used to get the SCN closest to the specified time. FULL Export entire database (N). HELP Display Help messages (N). INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA. JOB_NAME Name of export job to create. LOGFILE Log file name (export.log). NETWORK_LINK Name of remote database link to the source system. NOLOGFILE Do not write logfile (N). PARALLEL Change the number of active workers for current job. PARFILE Specify parameter file. QUERY Predicate clause used to export a subset of a table. SAMPLE Percentage of data to be exported; SCHEMAS List of schemas to export (login schema). STATUS Frequency (secs) job status is to be monitored where the default (0) will show new status when available. TABLES Identifies a list of tables to export - one schema only. TABLESPACES Identifies a list of tablespaces to export. TRANSPORT_FULL_CHECK Verify storage segments of all tables (N). TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded. VERSION Version of objects to export where valid keywords are: (COMPATIBLE), LATEST, or any valid database version. The following commands are valid while in interactive mode. Note: abbreviations are allowed Command Description ------------------------------------------------------------------------------ ADD_FILE Add dumpfile to dumpfile set. CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle. EXIT_CLIENT Quit client session and leave job running. FILESIZE Default filesize (bytes) for subsequent ADD_FILE commands. HELP Summarize interactive commands. KILL_JOB Detach and delete job. PARALLEL Change the number of active workers for current job. PARALLEL=
. START_JOB Start/resume current job. STATUS Frequency (secs) job status is to be monitored where the default (0) will show new status when available. STATUS[=interval] STOP_JOB Orderly shutdown of job execution and exits the client. STOP_JOB=IMMEDIATE performs an immediate shutdown of the Data Pump job. C:\Documents and Settings\nupneja> Setting the CONTENT parameter to METADATA_ONLY will export only the structure of the schema and skip the rows.
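For example, a metadata-only schema export might look like this (directory, dump file and schema names are just placeholders):
expdp scott/tiger schemas=scott directory=dmpdir dumpfile=scott_meta.dmp content=metadata_only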
-
Oracle 11gR2 Standard Edition Data Pump
Hello
I understand that Data Pump is fully supported in Oracle 11gR2 Standard Edition, and that the one feature that is not included is "parallel". Looking at http://download.oracle.com/docs/cd/B28359_01/license.111/b28287/editions.htm, I noticed that for SE, Flashback Database is "N".
That being the case, does anyone know if I can use expdp with the 'FLASHBACK_TIME' or 'FLASHBACK_SCN' parameter on Oracle 11gR2 SE? We are an EE shop here, so I will download/install SE and test it in-house, but if someone has already tried it and knows the answer, it will save me a lot of time.
Thanks in advance.
Yes, you can use flashback_time and flashback_scn in SE. It is similar to using a flashback query, which is supported in SE and SE1.
Here is an example of flashback_time, but on 10gR2 (I don't have 11gR2 running right now):
Connected to: Oracle Database 10g Release 10.2.0.4.0 - Production
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** dumpfile=dptest_fb.dmp schemas=sample include=table:" = 'CLASS'" flashback_time='2010-03-02:11:30:00'
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 128 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SAMPLE"."CLASS" 87.64 KB 696 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
-
"Resume" a Data Pump import Execution failure
First of all, here is the "failed" Data Pump import I want to "resume":
$ impdp "/ as sysdba" directory=DATA_PUMP_DIR dumpfile=mySchema%U.dmp logfile=Import.log schemas=MYSCHEMA parallel=2
As you can see, this job failed without processing the statistics, constraints, PL/SQL etc. What I want to do is run another impdp command but skip the objects that were imported successfully, as shown above. Is it possible to do this (using the EXCLUDE parameter maybe?) with impdp? If so, what would the command be?
Import: Release 10.2.0.3.0 - 64 bit Production Tuesday, February 16, 2010 14:35:15
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64 bit Production
With the options of Real Application Clusters, partitioning, OLAP and Data Mining
Master table "SYS"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_SCHEMA_01": "/ AS SYSDBA" dumpfile=mySchema%U.dmp directory=DATA_PUMP_DIR logfile=Import.log schemas=MYSCHEMA parallel=2
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "MYSCHE"...
...
... 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.LOAD_METADATA [INDEX:"MYSCHEMA"."OBJECT_RELATION_I2"]
SELECT process_order, flags, xml_clob, NVL(dump_fileid, :1), NVL(dump_position, :2), dump_length, dump_allocation, grantor, object_row, object_schema, object_long_name, processing_status, processing_state, base_object_type, base_object_schema, base_object_name, property, size_estimate, in_progress FROM "SYS"."SYS_IMPORT_SCHEMA_01" WHERE process_order BETWEEN :3 AND :4 AND processing_state <> :5 AND duplicate = 0 ORDER BY process_order
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at "SYS.KUPW$WORKER", line 12280
ORA-12801: error signaled in parallel query server P001, instance pace2.capitolindemnity.com:bondacc2 (2)
ORA-30032: the suspended (resumable) statement has timed out
ORA-01652: unable to extend temp segment by 128 in tablespace OBJECT_INDX
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 6272
----- PL/SQL Call Stack -----
object      line  object
handle    number  name
0x1f9ac8d50   14916  package body SYS.KUPW$WORKER
0x1f9ac8d50    6293  package body SYS.KUPW$WORKER
0x1f9ac8d50    3511  package body SYS.KUPW$WORKER
0x1f9ac8d50    6882  package body SYS.KUPW$WORKER
0x1f9ac8d50    1259  package body SYS.KUPW$WORKER
0x1f8431598       2  anonymous block
Job "SYS"."SYS_IMPORT_SCHEMA_01" stopped due to fatal error at 23:05:45
Thank you
John
Why don't you just restart the job? It skips everything that has already been imported.
impdp "/ as sysdba" attach="SYS"."SYS_IMPORT_SCHEMA_01"
Import> continue_client
Dean
-
In Oracle 11g, Oracle introduced the new feature called Oracle Data Pump. What is the difference compared with the imp/exp utilities? Can someone explain how to use this new feature,
for example, using the user scott to export some tables in my schema?
Data Pump is very fast, and parallel import is available in 11g. exp/imp sometimes fails due to space problems on the server, but with Data Pump the load can be done in parallel, and a network-mode import loads rows as they are exported without staging a dump file, which saves disk space. With Data Pump Import, a single stream of data load is about 15 to 45 times faster than with original Import.
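As a hedged illustration of the basic usage (table and file names are placeholders; DATA_PUMP_DIR is the default directory object in recent releases):
expdp scott/tiger tables=emp,dept directory=DATA_PUMP_DIR dumpfile=scott_tabs.dmp logfile=scott_tabs.log
impdp scott/tiger tables=emp,dept directory=DATA_PUMP_DIR dumpfile=scott_tabs.dmp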
-
Hello
I use
expdp system/ar8mswin1256@nt11g schemas=dbo_mobile_webresults_test dumpfile=31082015.dmp
and I face this error:
UDE-00018: Data Pump client is incompatible with database version 11.01.00.07.00
I think it's a version problem.
I found that the database on the server I am connected to is 11.2.0.1.0 - 64 bit
and my client is 11.1.0.7.0
I tried it on another pc and it worked.
Thank you very much
-
Using Data Pump for Migration
Hi all
Database version - 11.2.0.3
RHEL 6
DB size - 150 GB
I have to run a database migration from one server to another (AIX to Linux). We will use the Data Pump option; we will migrate from source to target using the expdp SCHEMAS option (5 schemas will be exported and imported on the target machine). But the target won't go live immediately: after the migration, the development team will do work on the target machine which will take them 2 days to complete, and during those 2 days the source database will keep running as production.
Now I have a requirement that, after the development team completes their work, I have to carry the 2 days of changes from source to target, after which the target will act as production.
I want to know what options are available in Data Pump that I can use to do this.
Kind regards
No business will want to go live on something whose data is no longer representative of the live system.
This sounds like a normal upgrade, except you are testing it on a copy of the live system - make sure the process works, then, once you are comfortable, replay it against your latest set of up-to-date production data.
Dean's suggestion is fine, but rather than dropping pieces and re-importing them, personally I tend to keep things simple and do it all again (a full schema datapump if that is possible). That way you know exactly what you carried over, including all sequences and objects (sequences could have been incremented, so you must drop and re-create them). Otherwise you are splitting an upgrade into stages, with more steps to trace and more potential conflicts to examine. Even if they are simple, a full datapump would be preferable. Simple is always best with production data.
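A hedged sketch of that full-schema refresh (schema and file names are placeholders, and dropping or recreating the target schemas beforehand is one possible way to start clean):
expdp system/password schemas=S1,S2,S3,S4,S5 directory=DATA_PUMP_DIR dumpfile=refresh_%U.dmp parallel=4 logfile=refresh_exp.log
impdp system/password schemas=S1,S2,S3,S4,S5 directory=DATA_PUMP_DIR dumpfile=refresh_%U.dmp logfile=refresh_imp.log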
Also - do you know what changes were made to set up the new environment, i.e. how would you roll those back, etc.? Worth looking at. Most such migrations would be a database copy via RMAN / cross-endian transport; you must also make sure you bring over all system grants for the schemas, not only what is at schema level.
-
Migration from 10g to 12c using Data Pump
Hi, while I have used Data Pump at the schema level before, I'm relatively new to full database imports.
We are trying a full database migration from 10.2.0.4 to 12c using the full database Data Pump method over a db link.
The DBA has indicated to avoid moving SYSAUX and SYSTEM objects, but initially, during the documentation review, it appeared that these objects are not exported anyway when TRANSPORTABLE=NEVER is in effect for the target. Can anyone confirm this? Yet the import/export log refers to objects I thought would not be included:
...
19:41:11.684 23 FEBRUARY 15: Estimated 3718 TABLE_DATA objects in 77 seconds
19:41:12.450 23 FEBRUARY 15: Total estimation using BLOCKS method: 52.93 GB
19:41:14.058 23 FEBRUARY 15: Processing object type DATABASE_EXPORT/TABLESPACE
20:10:33.185 23 FEBRUARY 15: ORA-31684: object type TABLESPACE:'UNDOTBS1' already exists
20:10:33.185 23 FEBRUARY 15: ORA-31684: object type TABLESPACE:'SYSAUX' already exists
20:10:33.185 23 FEBRUARY 15: ORA-31684: object type TABLESPACE:'TEMP' already exists
20:10:33.185 23 FEBRUARY 15: ORA-31684: object type TABLESPACE:'USERS' already exists
20:10:33.200 23 FEBRUARY 15: Completed 96 TABLESPACE objects in 1759 seconds
20:10:33.208 23 FEBRUARY 15: Processing object type DATABASE_EXPORT/PROFILE
20:10:33.445 23 FEBRUARY 15: Completed 7 PROFILE objects in 1 seconds
20:10:33.453 23 FEBRUARY 15: Processing object type DATABASE_EXPORT/SYS_USER/USER
20:10:33.842 23 FEBRUARY 15: Completed 1 USER objects in 0 seconds
20:10:33.852 23 FEBRUARY 15: Processing object type DATABASE_EXPORT/SCHEMA/USER
20:10:52.368 23 FEBRUARY 15: ORA-31684: object type USER:'OUTLN' already exists
20:10:52.368 23 FEBRUARY 15: ORA-31684: object type USER:'ANONYMOUS' already exists
20:10:52.368 23 FEBRUARY 15: ORA-31684: object type USER:'OLAPSYS' already exists
20:10:52.368 23 FEBRUARY 15: ORA-31684: object type USER:'MDDATA' already exists
20:10:52.368 23 FEBRUARY 15: ORA-31684: object type USER:'SCOTT' already exists
20:10:52.368 23 FEBRUARY 15: ORA-31684: object type USER:'LLTEST' already exists
20:10:52.372 23 FEBRUARY 15: Completed 1140 USER objects in 19 seconds
20:10:52.375 23 FEBRUARY 15: Processing object type DATABASE_EXPORT/ROLE
20:10:55.255 23 FEBRUARY 15: ORA-31684: object type ROLE:'SELECT_CATALOG_ROLE' already exists
20:10:55.255 23 FEBRUARY 15: ORA-31684: object type ROLE:'EXECUTE_CATALOG_ROLE' already exists
20:10:55.255 23 FEBRUARY 15: ORA-31684: object type ROLE:'DELETE_CATALOG_ROLE' already exists
20:10:55.256 23 FEBRUARY 15: ORA-31684: object type ROLE:'RECOVERY_CATALOG_OWNER' already exists
...
Any insight is appreciated.
The SYS, CTXSYS, MDSYS and ORDSYS schemas are not exported using exp/expdp.
DOC - ID: Note: 228482.1
I guess the 12c software has already been installed and a database created, it seems - so when you imported, you got these "already exists" messages.
Whenever a database is created and the software installed, SYSTEM, SYS and SYSAUX are created by default.
-
Hello Forum,
I have a question regarding Data Pump imports and exports, perhaps something I should already know.
I need to purge a table that has about 200 million rows; I need to get rid of about three quarters of the data.
My intention is to use Data Pump to export the table along with its indexes, constraints etc.
The table has no relationship to any other table; it is composed of approximately 8 columns with NOT NULL constraints.
My plan is:
1. truncate the table
2. disable or drop the indexes
3. leave the constraints in place?
4. use Data Pump to import only the rows to keep.
My questions:
will my indexes and constraints be imported as well when I want to import only a subset of my exported table?
or
If I drop the table after truncating it, will I be able to import my table and indexes, even if I use the QUERY functionality as part of my import statement?
When using the QUERY functionality in Data Pump, must my table exist in the database before doing the import,
or will the Data Pump import handle it as usual, i.e. create the table, indexes, grants, statistics etc.?
Thank you for your comments.
Regards
Your approach is inefficient.
What you need to do is:
create table foo as select * from bar where <rows you want to keep>;
truncate table bar;
insert /*+ APPEND */ into bar select * from foo;
Rebuild the indexes on the table.
Done.
This whole thing with expdp and impdp is just a waste of resources. My approach generates minimal redo.
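A minimal sketch of that sequence (table names, the keep-condition and the index name are hypothetical, not part of the original advice):
create table keep_rows as select * from big_table where created_date >= date '2015-01-01';
truncate table big_table;
insert /*+ APPEND */ into big_table select * from keep_rows;
commit;
-- rebuild any indexes that were dropped or marked unusable
alter index big_table_ix1 rebuild;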
----------
Sybrand Bakker
Senior Oracle DBA
-
Data Pump: export/import tables in different schemas
Hi all
I am using Oracle 11.2 and I would like to use Data Pump to export/import table data between different schemas. The tables already exist in the source and target schemas. Here are the commands I use:
Working export script:
expdp scott/tiger@db12 schemas=source include=TABLE:"IN ('TB_TEST1', 'TB_ABC')" directory=datapump_dir dumpfile=test.dump logfile=test_exp.log
Script to import all the tables:
impdp scott/tiger@db12 remap_schema=source:target directory=datapump_dir dumpfile=test.dump logfile=test_imp.log content=data_only table_exists_action=truncate
Script that errors when importing only some tables:
impdp scott/tiger@db12 remap_schema=source:target include=TABLE:"IN ('TB_TEST1')" directory=datapump_dir dumpfile=test.dump logfile=test_imp.log content=data_only
The export is good, but I get the following error when I try to import only the table TB_TEST1: "ORA-31655: no data or metadata objects selected for job". The user scott has the DBA role, and it works fine when I import all the exported tables without the INCLUDE clause.
Is it possible to import only some tables, but not all tables, from the export file?
Thanks for the help!
942572 wrote:
It is fine to import all the tables exported by scott when I do NOT use the INCLUDE clause.
I only get the error when I try to import some tables with the INCLUDE clause.
Can I import only some tables from the export dump file? Thank you!
You are using INCLUDE incorrectly!
Run the following yourself:
impdp help=yes
INCLUDE
Include specific object types.
For example, INCLUDE=TABLE_DATA.
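If the goal is simply to pull one table's rows out of the dump, one alternative (not necessarily what the answerer had in mind) is the TABLES parameter instead of INCLUDE, for example:
impdp scott/tiger@db12 tables=source.TB_TEST1 remap_schema=source:target directory=datapump_dir dumpfile=test.dump content=data_only table_exists_action=truncate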