expdp tablespace

Hello

We have a development database on 11gR1...


I wanted to perform a tablespace-level export...


When I give the following command,

It gives an error...
expdp db/****@orcl 
directory=smf dumpfile=tablesonly.dmp logfile=tablesonly.log  tablespaces=users;

Export: Release 11.1.0.6.0 - Production on Tuesday, 22 March, 2011 16:06:15

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "DBO_CIMS_2010_LATEST"."SYS_EXPORT_TABLESPACE_01":  dbo_cims_2010_latest/********@orcl directory=smf dumpfile=tablesonly.dmp logfile=tablesonly.log tablespaces=users;
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
ORA-31655: no data or metadata objects selected for job
Job "db"."SYS_EXPORT_TABLESPACE_01" completed with 1 error(s) at 16:06:17
What is the problem here?
What else should I include...?

Thanks guys...

That explains the error,

[oracle@edhdr2p16-orcl admin]$ oerr ora 31655
31655, 00000, "no data or metadata objects selected for job"
// *Cause:  After the job parameters and filters were applied,
//          the job specified by the user did not reference any objects.
// *Action: Verify that the mode of the job specified objects to be moved.
//          For command line clients, verify that the INCLUDE, EXCLUDE and
//          CONTENT parameters were correctly set.  For DBMS_DATAPUMP API
//          users, verify that the metadata filters, data filters, and
//          parameters that were supplied on the job were correctly set.
[oracle@edhdr2p16-orcl admin]$ 
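
Before re-running the job, it can also help to confirm that the tablespace actually contains segments the connecting user is allowed to export. A minimal check along those lines (illustrative only, not from the original thread; run as a privileged user):

select owner, segment_type, count(*) as segments
from   dba_segments
where  tablespace_name = 'USERS'
group  by owner, segment_type
order  by owner, segment_type;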

And it worked for me without errors,

[oracle@edhdr2p16-orcl admin]$ expdp aman/aman directory=dir1 dumpfile=expdp.dmp tablespaces=EXAMPLE

Export: Release 11.2.0.1.0 - Production on Tue Mar 22 16:16:31 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
Starting "AMAN"."SYS_EXPORT_TABLESPACE_01":  aman/******** directory=dir1 dumpfile=expdp.dmp tablespaces=EXAMPLE
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 50.68 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/TRIGGER
Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/INDEX/DOMAIN_INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/POST_INSTANCE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/POST_INSTANCE/PROCDEPOBJ
. . exported "SH"."CUSTOMERS"                            9.853 MB   55500 rows
. . exported "PM"."ONLINE_MEDIA"                         7.854 MB       9 rows
. . exported "SH"."SUPPLEMENTARY_DEMOGRAPHICS"           697.3 KB    4500 rows
. . exported "OE"."PRODUCT_DESCRIPTIONS"                 2.379 MB    8640 rows
. . exported "SH"."SALES":"SALES_Q4_2001"                2.257 MB   69749 rows
. . exported "SH"."SALES":"SALES_Q1_1999"                2.071 MB   64186 rows
. . exported "SH"."SALES":"SALES_Q3_2001"                2.130 MB   65769 rows
. . exported "SH"."SALES":"SALES_Q1_2000"                2.012 MB   62197 rows
. . exported "SH"."SALES":"SALES_Q1_2001"                1.965 MB   60608 rows
. . exported "SH"."SALES":"SALES_Q2_2001"                2.051 MB   63292 rows
. . exported "SH"."SALES":"SALES_Q3_1999"                2.166 MB   67138 rows
. . exported "SH"."SALES":"SALES_Q4_1999"                2.014 MB   62388 rows
. . exported "SH"."SALES":"SALES_Q2_2000"                1.802 MB   55515 rows
. . exported "SH"."SALES":"SALES_Q3_2000"                1.909 MB   58950 rows
. . exported "SH"."SALES":"SALES_Q4_1998"                1.581 MB   48874 rows
. . exported "SH"."SALES":"SALES_Q4_2000"                1.814 MB   55984 rows
. . exported "SH"."SALES":"SALES_Q2_1999"                1.754 MB   54233 rows
. . exported "SH"."SALES":"SALES_Q1_1998"                1.412 MB   43687 rows
. . exported "SH"."SALES":"SALES_Q3_1998"                1.633 MB   50515 rows
. . exported "PM"."PRINT_MEDIA"                          190.2 KB       4 rows
. . exported "SH"."SALES":"SALES_Q2_1998"                1.160 MB   35758 rows
. . exported "SH"."FWEEK_PSCAT_SALES_MV"                 419.8 KB   11266 rows
. . exported "SH"."PROMOTIONS"                           58.89 KB     503 rows
. . exported "SH"."TIMES"                                380.8 KB    1826 rows
. . exported "OE"."CUSTOMERS"                            77.98 KB     319 rows
. . exported "OE"."WAREHOUSES"                           13.42 KB       9 rows
. . exported "SH"."COSTS":"COSTS_Q4_2001"                278.4 KB    9011 rows
. . exported "PM"."TEXTDOCS_NESTEDTAB"                   87.73 KB      12 rows
. . exported "SH"."COSTS":"COSTS_Q1_1999"                183.5 KB    5884 rows
. . exported "SH"."COSTS":"COSTS_Q1_2001"                227.8 KB    7328 rows
. . exported "SH"."COSTS":"COSTS_Q2_2001"                184.5 KB    5882 rows
. . exported "SH"."COSTS":"COSTS_Q3_2001"                234.4 KB    7545 rows
. . exported "OE"."PRODUCT_INFORMATION"                  72.77 KB     288 rows
. . exported "SH"."COSTS":"COSTS_Q1_1998"                139.5 KB    4411 rows
. . exported "SH"."COSTS":"COSTS_Q1_2000"                120.6 KB    3772 rows
. . exported "SH"."COSTS":"COSTS_Q2_1998"                79.52 KB    2397 rows
. . exported "SH"."COSTS":"COSTS_Q2_1999"                132.5 KB    4179 rows
. . exported "SH"."COSTS":"COSTS_Q2_2000"                119.0 KB    3715 rows
. . exported "SH"."COSTS":"COSTS_Q3_1998"                131.1 KB    4129 rows
. . exported "SH"."COSTS":"COSTS_Q3_1999"                137.3 KB    4336 rows
. . exported "SH"."COSTS":"COSTS_Q3_2000"                151.4 KB    4798 rows
. . exported "SH"."COSTS":"COSTS_Q4_1998"                144.7 KB    4577 rows
. . exported "SH"."COSTS":"COSTS_Q4_1999"                159.0 KB    5060 rows
. . exported "SH"."COSTS":"COSTS_Q4_2000"                160.2 KB    5088 rows
. . exported "HR"."COUNTRIES"                            6.367 KB      25 rows
. . exported "HR"."DEPARTMENTS"                          7.007 KB      27 rows
. . exported "HR"."EMPLOYEES"                            16.81 KB     107 rows
. . exported "HR"."JOBS"                                 6.992 KB      19 rows
. . exported "HR"."JOB_HISTORY"                          7.054 KB      10 rows
. . exported "HR"."LOCATIONS"                            8.273 KB      23 rows
. . exported "HR"."REGIONS"                              5.476 KB       4 rows
. . exported "IX"."AQ$_ORDERS_QUEUETABLE_S"              10.91 KB       4 rows
. . exported "IX"."AQ$_STREAMS_QUEUE_TABLE_S"            11.17 KB       1 rows
. . exported "OE"."INVENTORIES"                          21.67 KB    1112 rows
. . exported "OE"."ORDERS"                               12.39 KB     105 rows
. . exported "OE"."ORDER_ITEMS"                          20.88 KB     665 rows
. . exported "OE"."PROMOTIONS"                             5.5 KB       2 rows
. . exported "SH"."CAL_MONTH_SALES_MV"                   6.312 KB      48 rows
. . exported "SH"."CHANNELS"                              7.25 KB       5 rows
. . exported "SH"."COUNTRIES"                            10.20 KB      23 rows
. . exported "SH"."PRODUCTS"                             26.18 KB      72 rows
. . exported "IX"."AQ$_ORDERS_QUEUETABLE_G"                  0 KB       0 rows
. . exported "IX"."AQ$_ORDERS_QUEUETABLE_H"                  0 KB       0 rows
. . exported "IX"."AQ$_ORDERS_QUEUETABLE_I"                  0 KB       0 rows
. . exported "IX"."AQ$_ORDERS_QUEUETABLE_L"                  0 KB       0 rows
. . exported "IX"."AQ$_ORDERS_QUEUETABLE_T"                  0 KB       0 rows
. . exported "IX"."AQ$_STREAMS_QUEUE_TABLE_C"                0 KB       0 rows
. . exported "IX"."AQ$_STREAMS_QUEUE_TABLE_G"                0 KB       0 rows
. . exported "IX"."AQ$_STREAMS_QUEUE_TABLE_H"                0 KB       0 rows
. . exported "IX"."AQ$_STREAMS_QUEUE_TABLE_I"                0 KB       0 rows
. . exported "IX"."AQ$_STREAMS_QUEUE_TABLE_L"                0 KB       0 rows
. . exported "IX"."AQ$_STREAMS_QUEUE_TABLE_T"                0 KB       0 rows
. . exported "IX"."ORDERS_QUEUETABLE"                        0 KB       0 rows
. . exported "IX"."STREAMS_QUEUE_TABLE"                      0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_1995"                       0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_1996"                       0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_H1_1997"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_H2_1997"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_Q1_2002"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_Q1_2003"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_Q2_2002"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_Q2_2003"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_Q3_2002"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_Q3_2003"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_Q4_2002"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_Q4_2003"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_1995"                       0 KB       0 rows
. . exported "SH"."SALES":"SALES_1996"                       0 KB       0 rows
. . exported "SH"."SALES":"SALES_H1_1997"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_H2_1997"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_Q1_2002"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_Q1_2003"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_Q2_2002"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_Q2_2003"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_Q3_2002"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_Q3_2003"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_Q4_2002"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_Q4_2003"                    0 KB       0 rows
Master table "AMAN"."SYS_EXPORT_TABLESPACE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for AMAN.SYS_EXPORT_TABLESPACE_01 is:
  /u01/app/oracle/expdp.dmp
Job "AMAN"."SYS_EXPORT_TABLESPACE_01" successfully completed at 16:18:41

HTH
Aman...

Tags: Database

Similar Questions

  • expdp 10g default tablespace - ORA-31626, ORA-31633, ORA-06512, ORA-01647

    Hello

    I currently have a 10g database running on an AIX server. One of its tablespaces (CTSPRODDOC) has data files in two locations:

    1 /doc/ctsproddoc.dbf

    2 /image/ctsproddoc_01.dbf

    I tried to export the tablespace using the following command:

    expdp cts/cts directory=dmpdir1 dumpfile=CTSPRODDOC.dmp tablespaces=CTSPRODDOC

    It fails with the following error messages:

    ORA-31626: job does not exist

    ORA-31633: unable to create master table "CTSPRODDOC.SYS_EXPORT_TABLESPACE_05"

    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95

    ORA-06512: at "SYS.KUPV$FT", line 863

    ORA-01647: tablespace 'CTSPRODDOC' is read-only, cannot allocate space in it

    I was able to successfully export a different tablespace from the same database to the same directory, but I am unable to export CTSPRODDOC.

    Can someone please advise.

    Thank you

    Hem

    Hello

    I suspect that the default tablespace for the CTS user is CTSPRODDOC. Can you change it to something else, or run the export as a different user that has a different default tablespace?
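
    For reference, a minimal sketch of that check and change (run as a DBA user; CTS is taken from the connect string above, and USERS as the new default tablespace is only an illustrative assumption):

    -- confirm the current default tablespace
    select username, default_tablespace from dba_users where username = 'CTS';

    -- point the user at any read-write tablespace (USERS is just an example)
    alter user cts default tablespace users;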

    Cheers,

    Harry

  • Export / import tablespace with all objects (data, users, roles)

    Hi, I have a problem, or rather a question, about tablespace export/import.

    On one side I have a 10g database (A), and on the other an 11g database (B).

    On A there is a tablespace called PRO.

    There are also 3 users:

    PRO_Main - contains the data - tablespace PRO

    PRO_User1 with role PRO_UROLE

    PRO_User2 with role PRO_UROLE

    Now I want to transfer the whole tablespace PRO (including the users PRO_MAIN, PRO_USER1, PRO_USER2 and the PRO_UROLE role) from A to B.

    On B, I created the user PRO_Main and the tablespace PRO.

    On A, I run the following statement:

    expdp PRO_Main/XXX DIRECTORY=backup_datapump DUMPFILE=TSpro.dmp LOGFILE=TSpro.log TABLESPACES=PRO

    On B:

    impdp PRO_Main/XXX DIRECTORY=backup_datapump DUMPFILE=TSpro.dmp LOGFILE=TSpro.log TABLESPACES=PRO

    Result:

    The user PRO_Main has been imported with all data.

    But PRO_USER1, PRO_USER2 and the PRO_UROLE role are missing...


    I guess I've used the wrong settings in my expdp and/or impdp.

    It would be nice if someone could give me a hint.

    Thanks in advance.

    Best regards
    Frank

    When you perform a TABLESPACE mode export by simply specifying tablespaces, all that gets exported is tables and their dependent objects. Users, roles, and the tablespace definitions themselves don't get exported.

    When you perform a SCHEMA mode export by specifying schemas, you get the schema definitions (if the schema running the export is privileged) and all of the objects that the schema owns. A schema does not own roles or tablespace definitions.

    In your case, you want to move:

    1. the schemas - you have already created one on your target database
    2. the roles
    3. everything in the tablespaces owned by the various schemas

    There is no single export/import command that will do all of that. This is how I would do it:

    1. Move the schema definitions:
    a. you can either create them manually, or
    b1. expdp schemas=... include=user
    b2. impdp the results of b1

    2. Move the roles:
    expdp full=y include=role ...
    Don't forget, this will include all of the roles. If you want to limit what is exported, use:
    include=role:"IN ('ROLE1','ROLE2', ...)"
    then impdp the roles you just exported

    3. Move the user data:
    a. If you want to move all of the objects in the schemas, such as functions, packages, etc., then you need to use a schema-mode export:
    expdp username/password schemas=a,b,c ...
    b. If you want to move only the objects in those tablespaces, then use a tablespace-mode export:
    expdp username/password tablespaces=tbs1,tbs2,...

    c. import the dumpfile generated in step 3:
    impdp username/password ...
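
    Putting the three steps together, a hedged end-to-end sketch (the directory, dump file names and credentials are illustrative assumptions, not values from this thread):

    # 1. schema definitions and roles (run on A as a privileged user)
    expdp system/password directory=dp_dir dumpfile=defs.dmp logfile=defs_exp.log full=y include=user include=role

    # 2. table data and dependent objects in the tablespace
    expdp system/password directory=dp_dir dumpfile=ts_pro.dmp logfile=ts_pro_exp.log tablespaces=PRO

    # 3. on B: load the definitions first, then the tablespace contents
    impdp system/password directory=dp_dir dumpfile=defs.dmp logfile=defs_imp.log full=y
    impdp system/password directory=dp_dir dumpfile=ts_pro.dmp logfile=ts_pro_imp.log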

    I hope this helps.

    Dean

  • DataPump to a single index tablespace sqlfile gives ORA-31655

    Hi, I'm trying to use the Data Pump export tablespaces feature to create a SQLFILE of DDL that creates the indexes in an index tablespace. When I run this command:

    expdp system tablespaces=idx_ts directory=DATA_PUMP_DIR dumpfile=idx_ts.dmp

    I get the following error:

    ORA-31655: no data or metadata objects selected for job

    My "Workaround" (which is not all that good) was therefore proceed as follows:

    expdp system schemas=schemaname include=index directory=DATA_PUMP_DIR dumpfile=test.dmp

    and then do:

    impdp system directory=DATA_PUMP_DIR dumpfile=test.dmp sqlfile=indexes.sql

    which gives a nice script that creates ALL the indexes belonging to schemaname. But what I am after is a SQLFILE script for just the IDX_TS tablespace.

    The questions are:

    1. Is there some other expdp command-line syntax that successfully exports the DDL for this index tablespace only into a dump file?
    2. Is there a way (other than a script - I have those) to use Data Pump to successfully "export" an index tablespace only?
    3. Why is this command failing?

    Thank you! Gil

    create a SQLFILE of DDL that creates the indexes in an index tablespace

    DBMS_METADATA.GET_DDL can also be used.

    [http://www.morganslibrary.org/reference/dbms_metadata.html]
    [http://www.orafaq.com/node/59]
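
    For example, a minimal sketch of the DBMS_METADATA.GET_DDL approach, spooling the DDL for every index stored in that tablespace (the IDX_TS name is assumed from the question):

    set long 100000 pagesize 0 linesize 200 trimspool on

    select dbms_metadata.get_ddl('INDEX', index_name, owner)
    from   dba_indexes
    where  tablespace_name = 'IDX_TS';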

  • How to create the same tablespaces in the test database as in production

    I used the following commands:
    (on 10gR2)
    expdp DIRECTORY=DATA_PUMP_DIR SCHEMAS=MDLOG EXCLUDE=STATISTICS INCLUDE=TABLESPACE CONTENT=METADATA_ONLY DUMPFILE=mdlogMETADATA.dmp

    Then:
    (on 11gR2)
    impdp DIRECTORY=DATA_PUMP_DIR DUMPFILE=mdlogMETADATA.dmp INCLUDE=TABLESPACE SQLFILE=c.sql

    According to the following posts, it should work:
    How to find the user name and tablespace name in a datapump import

    [http://www.rampant-books.com/art_nanda_datapump.htm | http://www.rampant-books.com/art_nanda_datapump.htm]

    Instead I get:
    ORA-39002: invalid operation
    ORA-39168: Object path TABLESPACE was not found.

    Now I'm looking into incompatible options between expdp (10gR2) and impdp (11gR2)... and whether include=TABLESPACE should be replaced by something else...

    In the meantime... can someone tell me if I'm making a mistake?

    Thanks

    Hello

    The expdp command you listed in your first post may not work. You have EXCLUDE and INCLUDE in the same command.

    EXCLUDE says exclude these objects but include everything else.

    INCLUDE says include only these objects and exclude everything else.

    If you use the expdp command from your second post, there are no tablespace objects in a schema export. INCLUDE=TABLESPACE will only recreate the tablespaces, not the objects in them. If you want to do this, you must remove SCHEMAS=MDLOG and add FULL=Y. Tablespace definitions are included in a full export.

    What is your ultimate goal? Do you just want the tablespace definitions moved? If so:

    expdp full=y include=tablespace directory=...

    If you want all of the tablespaces and the objects in those tablespaces, then I think it would be 2 steps:

    expdp full=y include=tablespace directory=...
    expdp tablespaces=tbs1,tbs2,... content=metadata_only ...

    You don't necessarily need the CONTENT parameter, but you had it on both of your expdp commands, so I guess that's what you wanted and I added it.

    If you have a different purpose, then publish it and I'll see what I can find.

    Dean

    p.s. If you want to see what objects are included in a particular mode, you can query sys.datapump_paths. het_type is the mode:

    full=y ...          look at het_type DATABASE_EXPORT
    schemas=...         look at het_type SCHEMA_EXPORT
    tables=...          look at het_type TABLE_EXPORT
    tablespaces=...     look at het_type TABLE_EXPORT
    transportable=...   look at het_type TRANSPORTABLE_EXPORT
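
    A hedged example of that kind of lookup, using the documented *_EXPORT_OBJECTS views that expose the same path information (the LIKE filter is only an illustration):

    select object_path, comments
    from   database_export_objects
    where  object_path like '%TABLESPACE%';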

    Published by: Dean WINS on February 2, 2010 09:18

  • When I do exp (expdp), why is the temporary tablespace used?

    Dear

    imp (impdp) uses the temporary tablespace,

    but I don't see the temp tablespace being used while exp (expdp) is running.

    I think that exp (expdp) will not use the temporary tablespace. Is that right?

    Please let me know?

    It could.  It must run queries on the data dictionary.  Some of these queries may require the temp tablespace.
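
    If you want to verify this empirically, a minimal sketch (not from the thread) that shows live temporary segment usage per session while the Data Pump job is running:

    select s.sid, s.username, u.tablespace, u.segtype, u.blocks
    from   v$session s
    join   v$tempseg_usage u on u.session_addr = s.saddr;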

    Hemant K Collette

  • EXPDP schema export error - tablespace does not exist...

    C:\>EXPDP ****/**** DIRECTORY=DATA_PUMP_DIR DUMPFILE=PRODDTA_SCHEMA.DMP LOGFILE=PRODDTA_SCHEMA_EXP.LOG
    
    Export: Release 10.2.0.4.0 - Production on Monday, 20 June, 2011 13:27:56
    
    Copyright (c) 2003, 2007, Oracle.  All rights reserved.
    
    Connected to: Oracle Database 10g Release 10.2.0.4.0 - Production
    ORA-31626: job does not exist
    ORA-31633: unable to create master table "PRODDTA.SYS_EXPORT_SCHEMA_05"
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPV$FT", line 871
    ORA-00959: tablespace 'PRODDTATB' does not exist
    Hi guys,

    I'm trying to export this particular schema, but apparently it errors out on this particular tablespace PRODDTATB. This tablespace was used temporarily to reorganise storage, was therefore no longer needed, and was subsequently dropped. How do I find which table, view or other object is still referring to this particular tablespace?

    I already checked the following:

    dba_tables
    dba_indexes
    dba_segments
    dba_lobs
    dba_tablespaces

    USER_TABLES
    USER_INDEXES
    user_segments
    user_lobs
    USER_TABLESPACES

    Or would recreating this particular tablespace allow me to export successfully? But before I recreate it, I want to find what objects are still referencing this particular tablespace... Hope someone can point me in the right direction.

    Thank you very much.

    Published by: user6338270 on June 20, 2011 12:13 AM
    Added the things I looked at...

    Views you can check:
    dba_users
    dba_tables

    I suppose your tablespace has indeed been dropped; in that case you will need to recreate this particular tablespace.
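
    To narrow down what still references the dropped tablespace, a hedged sketch (not from the thread) covering two common culprits - users' default/temporary tablespace and partition-level default tablespaces:

    select username, default_tablespace, temporary_tablespace
    from   dba_users
    where  default_tablespace = 'PRODDTATB' or temporary_tablespace = 'PRODDTATB';

    select owner, table_name, def_tablespace_name
    from   dba_part_tables
    where  def_tablespace_name = 'PRODDTATB';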

  • Specify the tablespace for the expdp master table?

    Is it possible to specify which tablespace the master table will be created in for an expdp job?

    I'm transporting the default tablespace of the user who runs expdp, and obviously expdp cannot create the master table because the tablespace is read-only for the transport. I am looking for a way to specify a different tablespace for the master table without changing the user's default tablespace. Does such a thing exist in 10g or 11g? TIA

    Chuck1958 wrote:
    Is it possible to specify which tablespace the master table will be created in for an expdp job?

    I'm transporting the default tablespace of the user who runs expdp, and obviously expdp cannot create the master table because the tablespace is read-only for the transport. I am looking for a way to specify a different tablespace for the master table without changing the user's default tablespace. Does such a thing exist in 10g or 11g? TIA

    Chuck1958,

    I don't know whether such a setting exists (or not)... On the other hand, here's another way:

    You can create a new user, give it the required privileges, and then start the Data Pump export as this new user using the "schemas" parameter.
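
    A minimal sketch of the first part of that approach (user name, tablespace and privileges are illustrative assumptions; the master table then lands in the new user's default tablespace):

    -- helper user whose default tablespace is writable
    create user dp_runner identified by secret
      default tablespace users
      quota unlimited on users;

    grant create session, exp_full_database to dp_runner;

    -- then run the export connected as that user, e.g. (parameters assumed):
    -- expdp dp_runner/secret directory=dp_dir dumpfile=tts.dmp transport_tablespaces=TTS1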

    Best regards

    Gokhan Atil

    -------------------------------------------------------
    If this answers your question, please mark the appropriate posts as correct/helpful and the thread as answered. Thank you

  • expdp, impdp between 10g and 12c

    Hi all

    We have Oracle 10g Release 2 in a Windows environment. We want to take a backup of this database using expdp and import it into Oracle 12c using impdp.

    How to do this?

    I am using the following approach:

    on the 10g database (production):

    expdp system/password@orcl full=y directory=TEST_DIR dumpfile=fulldb_12_jan_2016.dmp logfile=fulldb_12_jan_2016.log

    on the 12c database (test):

    impdp system/password@orcl full=y directory=TEST_DIR dumpfile=fulldb_12_jan_2016.dmp logfile=fulldb_12_jan_2016.log

    The first problem is that we need to create the tablespaces and data files. Do I have to create them as in production before running the impdp command?

    The second problem is that the user schemas have quotas on the tablespaces. So do I have to create the users as well before the impdp? I don't know some of the user schemas' passwords.

    Please guide me

    Thank you.

    Hello

    Since you are doing a full expdp, the users and tablespaces will be created for you.

    So there is no need to pre-create them.

    Regarding tablespaces, impdp will try to create the tablespaces with the same data files (i.e. the same file locations).

    So if you do not have the same directory structure on the destination, it will fail.

    If the file structure is different, you can pre-create the tablespaces, or remap the data files using remap_datafile, as sketched below.
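
    A hedged sketch of that remap (the file paths are illustrative assumptions only):

    impdp system/password@orcl full=y directory=TEST_DIR dumpfile=fulldb_12_jan_2016.dmp logfile=imp_fulldb.log remap_datafile="'D:\ORADATA\ORCL\USERS01.DBF':'/u01/app/oracle/oradata/ORCL/users01.dbf'"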

    Users will be created with the same passwords on the destination, so no worries there.

  • A question pertaining to the Tablespace while performing a Transportable Tablespace migration.

    Hi all

    I'm in the middle of a migration, and currently I am performing a transportable tablespace (TTS) migration for it. The tablespace I have to migrate is 3 TB in size.

    I've checked the endianness; it's fine, the two OSes are the same.

    No violations were found by the transport set check, the tablespace is self-contained, and I have also done the expdp for the tablespace. After that, I copied all 3 TB of data files and the export dump to the destination server.

    Now, here's where I'm confused. The next step is to create the appropriate schemas.

    The schema name and tablespace are CCF1 on the source and CCF2 on the destination, respectively.

    I guess I'll have to create the CCF2 user on the destination server, and to do so I first have to create the tablespace CCF2 so that I can create the user and assign its default tablespace. Does that mean I have to create the tablespace with 3 TB worth of data files?

    But what about the data files I already copied from the source? I would need 6 TB of space on the volume to accommodate both sets of data files. Please correct me if I am wrong somewhere... I'm sure I am.

    The database is 12c (no, I am not using pluggable databases), OS is RHEL5.

    Hi Indigo;

    See the Oracle Technology Network documentation: http://docs.oracle.com/database/121/ADMIN/transport.htm#ADMIN10140

    Now, here's where I'm confused. The next step is to create the appropriate schemas.

    The schema name and tablespace are CCF1 on the source and CCF2 on the destination, respectively.

    I guess I'll have to create the CCF2 user on the destination server, and to do so I first have to create the tablespace CCF2 so that I can create the user and assign its default tablespace. Does that mean I have to create the tablespace with 3 TB worth of data files?

    But what about the data files I already copied from the source? I would need 6 TB of space on the volume to accommodate both sets of data files. Please correct me if I am wrong somewhere... I'm sure I am.

    You are confused because, if you follow all of the steps in the Oracle document, you don't have to create the tablespaces and users yourself. Since you have exported the metadata with the Data Pump export, you just need to import that metadata into your target database.
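
    A hedged sketch of that metadata import step (directory, dump file and data file path are illustrative assumptions; the REMAP_SCHEMA clause simply reflects the CCF1-to-CCF2 naming discussed above):

    impdp system/password directory=dp_dir dumpfile=tts_ccf1.dmp logfile=tts_imp.log transport_datafiles='/u02/oradata/CCF2/ccf1_01.dbf' remap_schema=CCF1:CCF2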

    Follow the link; it will help you.

    Kind regards.

  • Impdp - data not imported into just one tablespace

    Windows 32-bit, Oracle 11g (11.2.0.1.0)

    Hello

    I did an import with Data Pump. All of the data in the tablespaces of my old DB was transferred perfectly to my new DB, except for one tablespace, which is empty.

    I used: expdp name/password@oldDB DUMPFILE=EXP_DB_FULL.DMP LOGFILE=EXP_DB_FULL.LOG

    Then I created on my new DB all of the tablespaces that were on the old DB.

    I did the Data Pump import with: impdp system/password@newDB DIRECTORY=dpump_dir1 DUMPFILE=EXP_DB_FULL.DMP LOGFILE=EXP_SIVOA_FULL.LOG FULL=Y

    I have no error message, but when I look at the tablespaces in Enterprise Manager, one of them is not complete. I created the tablespace with the same name and the same data file name.

    On my old DB this tablespace is 13 MB, and after the import on my new DB it is only 1.9 MB. The other tablespaces I created are perfectly complete; they are almost the same size as they were on the old DB.

    I don't understand why this tablespace is not filled. What could be the reasons for just this one tablespace not being imported, without errors?

    I don't know if I have given you all the information about my setup, so tell me if I forgot something.

    Thanks for any information

    It simply means that the object is larger in the old DB because it had free space inside it.

    When you imported, the free space inside the object was removed, so the tablespace is smaller because the object is smaller.

    All of the data is still in the object, and you can count the rows to confirm.
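
    A hedged sketch of that check (the table and tablespace names are illustrative assumptions) - run it on both databases and compare:

    -- row count of a table stored in the tablespace in question
    select count(*) from some_schema.some_table;

    -- segment sizes inside the tablespace
    select owner, segment_name, round(bytes/1024/1024, 1) as mb
    from   dba_segments
    where  tablespace_name = 'MY_TBS'
    order  by mb desc;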

  • parallel expdp on a bigfile tablespace

    expdp, even when run in parallel, just sits there for 8 hours backing up a bigfile tablespace...

    RMAN has the ability to use SECTION SIZE ##G,

    which allows RMAN parallelism to work effectively and back up the bigfile in parallel...

    but I'm on 11.2.0.3 and there is a bug in RMAN SECTION SIZE that breaks it...

    The only solution is to upgrade, which I am unable to do at the moment...

    Is there an expdp equivalent of SECTION SIZE that would allow it to parallelise a bigfile tablespace?

    As said - some things Data Pump has to do serially (at least in versions up to 12c, as far as I know).

    The only ways to try to work around this would be:

    (a) partition the source table (not a 5-minute job) - then each worker can process one partition each

    (b) run 8 separate jobs, each using QUERY=xxx (where xxx selects a subset of the data) - it's quite messy, but it can be done; see the sketch below.
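
    A hedged illustration of option (b) - the credentials, table, column and ID ranges below are assumptions, not values from this thread:

    # slice 1 of 8 - each job exports a disjoint range of the big table
    expdp scott/tiger directory=dp_dir dumpfile=big_part1.dmp logfile=big_part1.log tables=SCOTT.BIG_TABLE query='SCOTT.BIG_TABLE:"WHERE id BETWEEN 1 AND 10000000"'

    # slice 2 of 8 - run concurrently from another session
    expdp scott/tiger directory=dp_dir dumpfile=big_part2.dmp logfile=big_part2.log tables=SCOTT.BIG_TABLE query='SCOTT.BIG_TABLE:"WHERE id BETWEEN 10000001 AND 20000000"'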

    Cheers,

    Rich

  • Transportable tablespace

    Hello

    Does transportable tablespace mean impdp/expdp? Is it included in Oracle Standard Edition?

    Thank you.

    Hello

    Oracle Data Pump is also available in Standard Edition. Standard Edition does not allow parallel execution, so you cannot apply parallelism to Data Pump, but jobs can still be run.

    Import of Transportable Tablespaces is also available in Standard Edition.

    For more details, see feature availability by edition here:

    http://docs.Oracle.com/CD/B28359_01/license.111/b28287/editions.htm#DBLIC116

    Kind regards

    IonutC

  • Tablespace-level export, schema-level import - is it possible?

    Hello

    If I have a Data Pump tablespace-level export (performed with TABLESPACES=<list of tablespaces>), is it possible to import only the tables and dependent objects belonging to a specific schema residing in the exported tablespaces? The DB version is 11.2.0.3.0. According to the documentation, it should be possible: http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_import.htm#i1011943 : "A schema import is specified using the SCHEMAS parameter. The source can be a full, table, tablespace, or schema-mode export dump file set or another database."

    A quick test suggests, however, that this is not so:

    (1) On the source DB I have two tablespaces (TS1, TS2) and two schemas (USER1, USER2):

    SQL> select segment_name, tablespace_name, segment_type, owner
      2  from dba_segments
      3  where owner in ('USER1', 'USER2');

    OWNER   SEGMENT_NAME    SEGMENT_TYPE       TABLESPACE_NAME
    ------- --------------- ------------------ ----------------
    USER1   UQ_1            INDEX              TS1
    USER1   T2              TABLE              TS1
    USER1   T1              TABLE              TS1
    USER2   T4              TABLE              TS2
    USER2   T3              TABLE              TS2

    (2) Now I perform a tablespace-level export:

    $ expdp system directory=dp_dir tablespaces=ts1,ts2 dumpfile=test.dmp

    Export: Release 11.2.0.3.0 - Production on Fri Jul 11 14:02:54 2014

    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

    Password:

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production

    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYSTEM"."SYS_EXPORT_TABLESPACE_01":  system/******** directory=dp_dir tablespaces=ts1,ts2 dumpfile=test.dmp
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 256 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    . . exported "USER1"."T1"                                5.007 KB       1 rows
    . . exported "USER1"."T2"                                5.007 KB       1 rows
    . . exported "USER2"."T3"                                5.007 KB       1 rows
    . . exported "USER2"."T4"                                5.007 KB       1 rows
    Master table "SYSTEM"."SYS_EXPORT_TABLESPACE_01" successfully loaded/unloaded

    ******************************************************************************

    (3) I try to import only the objects belonging to USER1 and I get the "ORA-39039: Schema expression "('USER1')" contains no valid schemas" error:

    $ impdp system directory=dp_dir schemas=USER1 dumpfile=test.dmp

    Import: Release 11.2.0.3.0 - Production on Fri Jul 11 14:05:15 2014

    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

    Password:

    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production

    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-31655: no data or metadata objects selected for job
    ORA-39039: Schema expression "('USER1')" contains no valid schemas.

    (4) However, the dump file clearly contains the owner of the tables:

    $ impdp system directory=dp_dir dumpfile=test.dmp sqlfile=imp_dump.txt

    Excerpt from imp_dump.txt:

    -- new object type path: TABLE_EXPORT/TABLE/TABLE
    CREATE TABLE "USER1"."T1"
       ("DUMMY" VARCHAR2(1 BYTE)
       )

    So is it possible to somehow filter the objects belonging to a certain schema?

    Thanks in advance for any suggestions.

    Swear

    Hi Swear,

    This prompted a small investigation on my part, which I thought was worthy of a blog post in the end...

    Oracle DBA Blog 2.0: A datapump bug or a feature and an obscure workaround

    Initially I thought that you had made a mistake, but that doesn't seem to be the way it behaves. I've included a few possible solutions - see if one of them does what you want to do...
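
    The blog post itself isn't reproduced in this thread; purely as an illustrative sketch (an assumption, not necessarily one of the workarounds from the blog), a table-mode import can pull named tables of one owner out of the same tablespace-mode dump:

    impdp system directory=dp_dir dumpfile=test.dmp tables=USER1.T1,USER1.T2 logfile=imp_user1.log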

    Cheers,

    Rich

  • Using remap_tablespace in impdp but still getting ORA-01658 for the old tablespace...

    Hi all

    I posted this question in the Export/Import/SQL Loader & External Table section but couldn't find a workaround, so I'm posting the question here.

    DB version - 10.2.0.4.0

    OS - SuSE Linux Enterprise Server 10 (x86_64)

    I took an export of 1 schema using expdp and everything went successfully.

    The command used was:

    nohup expdp user_in_dev/user_in_dev dumpfile=user_in_dev_18june_new.dmp logfile=expdp_user_in_dev_18june_new.log directory=dir_user_in_dev VERSION=10.2.0 STATUS=60 CONTENT=ALL &

    All of the above schema's data is in the DEV_TBLSPACE tablespace, which is almost full (98%).

    Now I'm importing the dumpfile into production under the schema user_in_prod. Tablespace DEV_TBLSPACE is also present here and is almost 90% full, so I'm remapping it to PROD_TBLSPACE using the command below:

    nohup impdp user_in_prod/user_in_prod DIRECTORY=dir_user_in_dev DUMPFILE=user_in_dev_18june_new.dmp LOGFILE=impdp_user_in_prod_18june_new.log CONTENT=ALL remap_schema=user_in_dev:user_in_prod VERSION=10.2.0 remap_tablespace=DEV_TBLSPACE:PROD_TBLSPACE &

    but I am getting the error below:

    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production 64-bit

    With partitioning, OLAP, Data Mining and Real Application Testing options

    ORA-31626: job does not exist

    ORA-31633: unable to create master table "USER_IN_PROD.SYS_IMPORT_FULL_05"

    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95

    ORA-06512: at "SYS.KUPV$FT", line 871

    ORA-01658: unable to create INITIAL extent for segment in tablespace DEV_TBLSPACE

    Since I use remap_tablespace to change the tablespace, it should now use PROD_TBLSPACE, which has enough free space available.

    The user user_in_prod also has an unlimited (-1) quota on PROD_TABLESPACE.

    Currently the user_in_prod schema is empty and does not contain any objects.

    Kindly help.

    P.S. - the exported dev tables are not partitioned tables.

    The error messages are self-explanatory, if you are willing to read and investigate them, which apparently you are not.

    Fact: both expdp and impdp create a table that is used to manage the expdp or impdp job.

    Fact: this table belongs to the user who runs the expdp or impdp.

    Fact: it is created in the DEFAULT tablespace of that user.

    So: what's the obvious conclusion?

    The default tablespace for USER_IN_PROD is DEV_TBLSPACE,

    and there is no space left in it even to create this small impdp table.
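
    A hedged sketch of the corresponding check and fix (run as a DBA; pointing the default at PROD_TBLSPACE is only one possible choice):

    select username, default_tablespace from dba_users where username = 'USER_IN_PROD';

    alter user user_in_prod default tablespace PROD_TBLSPACE;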

    And you don't need to post in two forums to reach this conclusion.

    Just a little effort and a little thought.

    However, the only thing most people here seem to know is how to hit Ctrl+C and Ctrl+V.

    Oracle is not rocket science, and people who don't want to make any effort to master it should stay away from it.

    -------------

    Sybrand Bakker

    Senior Oracle DBA
