Large table must be moved to another schema

Hi Oracle gurus,

I'm using AIX 5.3 with Oracle 11.2.0.2.

I just finished a job where I took the LOW partition of a partitioned table on our production database and installed all of its data into its own non-partitioned table on our archive database. This table on Archive is 1.5 TB.

Just to give you a brief overview of what I've done, successfully:

-Created a non-partitioned table on Production with exactly the same structure as the partitioned table, apart from the partitions. Then I moved all of the relevant segments into the same tablespace as the table to make it TRANSPORTABLE (a self-containment check is sketched below, after the parfile).

-I then took an expdp of the table's metadata using the transport_tablespaces parameter.

-I set the tablespace back to read write and used the cp command to transfer the data files to the new directory.

-Then, on the ARCHIVE database, I used impdp to import the metadata, pointing it to the new data files.

parfile =

DIRECTORY = DATA_PUMP_DIR

DUMPFILE = dumpfile.dmp

LOGFILE = logfile.log

REMAP_SCHEMA = schema1:schema2

DB11G = "xx", "xx", "xx"...
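A check worth running before the metadata export is DBMS_TTS.TRANSPORT_SET_CHECK, to confirm that the tablespace set is self-contained; a minimal sketch, using a hypothetical tablespace name TBS_LOW (the real name is not shown above):

EXEC DBMS_TTS.TRANSPORT_SET_CHECK('TBS_LOW', TRUE);

-- should return no rows if the set is self-contained
SELECT * FROM transport_set_violations;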

My problem now is that, owing to some confusion, I mapped to the wrong schema. This isn't a major problem, but I wouldn't like to leave it that way: instead of REMAP_SCHEMA = schema1:schema2 it should have been REMAP_SCHEMA = schema1:schema3.

So the question: what is the best way to get the table (1.5 TB, currently in schema2) into schema3?


So the question: what is the best way to get the table (1.5 TB, currently in schema2) into schema3?

The easiest way is to use EXCHANGE PARTITION to just 'swap' the segment in.

You can only 'swap' between a partitioned table and a non-partitioned table, and you already have a non-partitioned table.

So, create a partitioned table with a single partition in the new schema, and another non-partitioned table in that same schema.

Then exchange the table in the old schema into the partition of the partitioned table in the new schema. Then exchange that partition with the new non-partitioned table in the new schema. No data should move at all - it is just a data dictionary operation.

Using the HR and SCOTT schemas: suppose you have a copy of the EMP table in SCOTT that you want to move to HR.

-- as SCOTT
GRANT ALL ON emp_copy TO HR;

-- as HR

CREATE TABLE EMP_COPY AS SELECT * FROM SCOTT.EMP_COPY WHERE 1 = 0;

-- create a partitioned intermediate table with the same structure as the table
CREATE TABLE EMP_COPY_PART
PARTITION BY RANGE (empno)
(partition ALL_DATA values less than (MAXVALUE)
)
AS SELECT * FROM EMP_COPY;

-- exchange the source table's segment into the partition - very fast
ALTER TABLE EMP_COPY_PART EXCHANGE PARTITION ALL_DATA WITH TABLE SCOTT.EMP_COPY;

-- now exchange it back out into the target table
ALTER TABLE EMP_COPY_PART EXCHANGE PARTITION ALL_DATA WITH TABLE EMP_COPY;
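Applied to the situation in the question (the 1.5 TB table currently in schema2, schema3 as the target), the same double exchange might look like the sketch below; the table name big_table and partition key id are made up for illustration:

-- as schema2
GRANT ALL ON big_table TO schema3;

-- as schema3: an empty single-partition intermediate table and an empty target table
CREATE TABLE big_table_part
PARTITION BY RANGE (id)
(PARTITION all_data VALUES LESS THAN (MAXVALUE))
AS SELECT * FROM schema2.big_table WHERE 1 = 0;

CREATE TABLE big_table AS SELECT * FROM schema2.big_table WHERE 1 = 0;

-- swap the 1.5 TB segment in, then out again - data dictionary operations only
ALTER TABLE big_table_part EXCHANGE PARTITION all_data WITH TABLE schema2.big_table;
ALTER TABLE big_table_part EXCHANGE PARTITION all_data WITH TABLE big_table;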

Tags: Database

Similar Questions

  • Moving subpartitions to a duplicate table in another schema

    NOTE: I asked this question on the SQL and PL/SQL forum, but came here as I think it is more appropriate to this forum. I placed a pointer to this post on the original post.

    Hello ladies and gentlemen.

    We are currently involved in an exercise at my place of work where we're trying to logically organize our data by world region. For background, our production database is currently at version 10.2.0.3 and will soon move to 10.2.0.5.

    For the moment, all our data "lives" in the same schema. We are trying to produce a proof of concept to migrate this data to identically structured (and named) tables in separate database schemas, each schema representing a world region.

    In our current schema, our data are range partitioned by date and then list subpartitioned on a column named OFFICE. I want to move the OFFICE subpartitions from one schema into an identically named and structured table in a new schema. The tablespace will remain the same for the two identically named tables in the two schemas.

    Do any of you have an opinion on the best way to do this? Ideally, in the new schema, I would like to create each table as an empty table with the appropriate range and list partitions already defined. I did a few tests in our development environment with the EXCHANGE PARTITION statement, but that requires the destination table to be non-partitioned.

    I was wondering whether, for this kind of partition migration where the table name and tablespace remain constant across schemas, there is a formal 'best practices' method to accomplish such a subpartition move cleanly, quickly and elegantly?

    Any helpful response is welcome.

    See you soon.

    James

    You CAN exchange a subpartition into another table by using a 'temporary' (staging) table as an intermediate.

    See:

    SQL> drop table part_subpart purge;
    
    Table dropped.
    
    SQL> drop table NEW_part_subpart purge;
    
    Table dropped.
    
    SQL> drop table STG_part_subpart purge;
    
    Table dropped.
    
    SQL>
    SQL> create table part_subpart(col_1  number not null, col_2 varchar2(30))
      2  partition by range (col_1) subpartition by list (col_2)
      3  (
      4  partition p_1 values less than (10) (subpartition p_1_s_1 values ('A'), subpartition p_1_s_2 values ('B'), subpartition p_1_s_3 values ('C'))
      5  ,
      6  partition p_2 values less than (20) (subpartition p_2_s_1 values ('A'), subpartition p_2_s_2 values ('B'), subpartition p_2_s_3 values ('C'))
      7  )
      8  /
    
    Table created.
    
    SQL>
    SQL> create index part_subpart_ndx on part_subpart(col_1) local;
    
    Index created.
    
    SQL>
    SQL>
    SQL> insert into part_subpart values (1,'A');
    
    1 row created.
    
    SQL> insert into part_subpart values (2,'A');
    
    1 row created.
    
    SQL> insert into part_subpart values (2,'B');
    
    1 row created.
    
    SQL> insert into part_subpart values (2,'B');
    
    1 row created.
    
    SQL> insert into part_subpart values (2,'C');
    
    1 row created.
    
    SQL> insert into part_subpart values (11,'A');
    
    1 row created.
    
    SQL> insert into part_subpart values (11,'C');
    
    1 row created.
    
    SQL>
    SQL> commit;
    
    Commit complete.
    
    SQL>
    SQL> create table NEW_part_subpart(col_1  number not null, col_2 varchar2(30))
      2  partition by range (col_1) subpartition by list (col_2)
      3  (
      4  partition n_p_1 values less than (10) (subpartition n_p_1_s_1 values ('A'), subpartition n_p_1_s_2 values ('B'), subpartition n_p_1_s_3 values ('C'))
      5  ,
      6  partition n_p_2 values less than (20) (subpartition n_p_2_s_1 values ('A'), subpartition n_p_2_s_2 values ('B'), subpartition n_p_2_s_3 values ('C'))
      7  )
      8  /
    
    Table created.
    
    SQL>
    SQL> create table STG_part_subpart(col_1  number not null, col_2 varchar2(30))
      2  /
    
    Table created.
    
    SQL>
    SQL> -- ensure that the Staging table is empty
    SQL> truncate table STG_part_subpart;
    
    Table truncated.
    
    SQL> -- exchanging a subpart out of part_subpart
    SQL> alter table part_subpart exchange subpartition
      2  p_2_s_1 with table STG_part_subpart;
    
    Table altered.
    
    SQL> -- exchanging the subpart into NEW_part_subpart
    SQL> alter table NEW_part_subpart exchange subpartition
      2  n_p_2_s_1 with table STG_part_subpart;
    
    Table altered.
    
    SQL>
    SQL>
    SQL> select * from NEW_part_subpart subpartition (n_p_2_s_1);
    
         COL_1 COL_2
    ---------- ------------------------------
            11 A
    
    SQL>
    SQL> select * from part_subpart subpartition (p_2_s_1);
    
    no rows selected
    
    SQL>
    

    I exchanged subpartition p_2_s_1 out of the part_subpart table and into the NEW_part_subpart table - even with a different name for the subpartition (n_p_2_s_1) if you wish.

    NOTE: because the source and target tables are in different schemas, you need to move (or copy) the intermediate table STG_part_subpart from the first schema to the second schema after the first 'exchange subpartition' is done. You will need to do this for each subpartition you exchange.

    Hemant K Chitale

    Published by: Hemant K Chitale, April 4, 2011 10:19
    Added details for cross-schema exchange.

  • How to gather table statistics for tables in a different schema

    Hi all

    I have a table in one schema, and I want to gather statistics for that table from a different schema.

    I issued GRANT ALL ON SCHEMA1.T1 TO SCHEMA2;

    And when I tried to run the command to gather the statistics using

    DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => 'SCHEMA1', TABNAME => 'T1');

    The function will fail.

    Is there a way to gather table statistics for tables in one schema from another schema?

    Thank you
    MK.

    You must grant the ANALYZE ANY privilege to schema2.

    SY.
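    A minimal sketch of what that looks like, using the names from the question:

    -- as a suitably privileged user (e.g. a DBA)
    GRANT ANALYZE ANY TO schema2;

    -- then, connected as SCHEMA2
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCHEMA1', tabname => 'T1');
    END;
    /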

  • Gathering table statistics takes longer for large tables

    Version: 11.2

    I noticed that gathering stats (via dbms_stats.gather_table_stats) takes more time for large tables.

    Since the number of rows must be calculated, gathering statistics on a big table would naturally take a little longer (it runs a SELECT COUNT(*) internally).
    But for a non-partitioned table with 3 million rows, it took 12 minutes to gather the stats? Apart from the row count and the index information, what other information is collected by a table stats gather?

    Does the size of the table actually matter for gathering statistics?

    DESC USER_TABLES

    and also

    USER_IND_STATISTICS
    USER_PART_COL_STATISTICS
    USER_SUBPART_COL_STATISTICS
    USER_TAB_COL_STATISTICS
    USER_TAB_STATISTICS
    USER_TAB_STATS_HISTORY
    USER_USTATS
    USER_TAB_HISTOGRAMS
    USER_PART_HISTOGRAMS
    USER_SUBPART_HISTOGRAMS
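    For example, a quick way to see what a gather actually populated (T1 is a placeholder table name):

    SELECT num_rows, blocks, avg_row_len, sample_size, last_analyzed
    FROM   user_tab_statistics
    WHERE  table_name = 'T1';

    SELECT column_name, num_distinct, num_nulls, histogram
    FROM   user_tab_col_statistics
    WHERE  table_name = 'T1';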

  • Changing the data type of a column in large tables

    Hello

    I have a very large table, 3 TB with 97 partitions, and I need to extend the size of a column. This operation can take a long time and I'm afraid it will block some SQL operations. What is the recommended way to do it? Can I just alter the column? Should I add a new column, copy the values, drop the old one and rename the new one?

    Kind regards

    Nestor Boscan

    Because you are widening the column, all existing values are guaranteed to fit the new size, so this should happen fairly quickly. If you were instead decreasing the size of a column, each value would have to be checked before the DDL could complete, to confirm that it would all fit in the smaller space. The simple ALTER TABLE is a good choice here.

    If these fields are variable length (NUMBER, VARCHAR2), then the space does not have to be 'reserved' for future data; Oracle will handle future inserts or updates of larger values as usual. If you increase a fixed-length data type such as CHAR, then it is another story.
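    For the widening case, that is a single statement; a minimal sketch with made-up names:

    -- widening a VARCHAR2 is a metadata-only change; existing rows are untouched
    ALTER TABLE big_table MODIFY (description VARCHAR2(400));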

  • Data Pump import of a table into another schema in 11g

    Hi all

    I have an Oracle 11.2 database and a requirement to import a few tables into a new schema using my export from the previous month. I can't import the whole schema as it is very large. I checked the REMAP_TABLE option, but it only creates the table in the same schema and simply renames the table.


    For example, I have the table GAS.EMPLOYEE_DATA and I want to import it into GASTEST.EMPLOYEE_DATA.

    Is there a way to do it using datapump?



    Appreciate any advice.

    Hello

    You can use the INCLUDE parameter in order to select only one table:

    REMAP_SCHEMA=SCOTT:SCOTT_TEST
    INCLUDE=TABLE:"IN ('EMP')"
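
    Applied to the example in the question, the parfile might look like this (the dump file name is made up):

    DIRECTORY=DATA_PUMP_DIR
    DUMPFILE=monthly_export.dmp
    REMAP_SCHEMA=GAS:GASTEST
    INCLUDE=TABLE:"IN ('EMPLOYEE_DATA')"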
    

    Hope this helps.
    Best regards
    Jean Valentine

  • How to move a table to another schema

    Hi;

    Imagine that I have a schema A and I create a table named test in this schema. Now I have a different schema named B. I want to move my A.test table to B.test2.

    How can I do?

    Thank you

    If you create the new table in schema B, the new table will be stored in B's default tablespace. Unless you want the new table to be stored in a non-default tablespace, you do not need to specify a tablespace when creating the new table.
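    If a straight copy is acceptable, a minimal sketch using the names from the question is:

    -- as A (or a DBA): let B read the source table
    GRANT SELECT ON a.test TO b;

    -- as B: create the new table from the old one; drop A.test afterwards if you really want a move
    CREATE TABLE test2 AS SELECT * FROM a.test;

    Note that indexes, constraints and grants on A.test are not carried over by the CTAS and would have to be recreated.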

  • Bitmap vs. domain indexes for large tables

    I have a data warehouse DB which consists of very large tables.

    I have two questions:

    1. Should I use bitmap indexes or domain indexes?

    2. Should I use a different tablespace for storing the indexes? Would this improve performance?

    Please give me advice on improving query performance for these large tables (more than 300 million records).

    Regards

    When to use bitmap indexes

    ------------------------------

    - The column has low cardinality: few distinct values.

    - Bitmap indexes are particularly useful for complex ad-hoc queries with long WHERE clauses or aggregations (containing SUM, COUNT, or other aggregate functions).

    - The table contains many rows (1,000,000 rows with 10,000 distinct values may be acceptable).

    - There are frequent, possibly ad hoc, queries on the table.

    - The environment is data-warehouse oriented (a DSS system). Bitmap indexes are not suitable for online transaction processing (OLTP) environments because of their locking behavior. It is not possible to lock a single bitmap entry: the smallest piece of a bitmap that can be locked is a bitmap segment, which can be up to half a data block in size. Changing the value of one row causes a bitmap segment to be locked, effectively locking changes on a number of rows. This is a serious drawback when there are many concurrent UPDATE, INSERT or DELETE statements from users. It is not a problem when loading or updating data in bulk, as in data warehouse systems.

    - Bitmap join indexes are a 9.0 feature by which joins can be avoided by pre-creating a bitmap index on the join criteria. The BJI is an efficient way of reducing the volume of data selected: the data resulting from the join operation and its restrictions is kept permanently in the BJI. The join condition is an equi-inner join between the primary key column(s) of the dimension tables and the foreign key column(s) of the fact table.

    When to use domain indexes

    ---------------------------------------

    https://docs.Oracle.com/CD/E11882_01/AppDev.112/e41502/adfns_indexes.htm#ADFNS00504
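    As an illustration of the bitmap options above, a sketch with invented warehouse table names (sales as the fact table, customers as a dimension):

    -- a plain bitmap index on a low-cardinality fact column
    CREATE BITMAP INDEX sales_channel_bix ON sales (channel_id);

    -- a bitmap join index: the join to the dimension is computed once, at index build time;
    -- customers.cust_id must be the dimension's primary or unique key
    CREATE BITMAP INDEX sales_cust_region_bjix
    ON sales (customers.cust_region)
    FROM sales, customers
    WHERE sales.cust_id = customers.cust_id;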

  • Schema name.table name in an ODI datastore

    Hello

    We have a schema -> others -> HFM_ESB -> TIME table.

    Now, I need to connect to the HFMESB.TIME table. When I try to enter this table in the ODI datastore, I get a message saying the table name is invalid.

    Any suggestions on how to access and reverse-engineer these tables in ODI?

    We were previously using synonyms and therefore saw no problems. But now we cannot use synonyms, hence the need to get this working...

    Would appreciate a quick response as this is somewhat urgent.

    Thanks in advance

    Hello

    Apparently HFM_ESB is your schema.

    First, create the data server in ODI if you have not already done so: go to the Topology tab in ODI Studio, then under Physical Architecture right-click on Oracle and choose New Data Server. Enter a name for your data server, for example TRG_MyName, and specify the username/password, for example USER: ODI_STAGING, PASS: MyPass. On the JDBC tab specify the JDBC driver and URL, for example oracle.jdbc.OracleDriver and jdbc:oracle:thin:@localhost:1521:orcl respectively. Save and close the window.

    Second, right-click on the (new) data server and click New Physical Schema; specify a name, then set Schema: HFM_ESB and Work Schema: ODI_STAGING.

    Please note: if you do not have an ODI_STAGING schema you can specify HFM_ESB as both the schema and the work schema.

    Third, under Logical Architecture right-click on the Oracle technology and create a New Logical Schema; specify a name, leave the Global context, and choose the (new) physical schema in the physical schemas column.

    Fourth, go from the Topology tab to the Designer tab. In the Models bar, click on the folder and then New Model. On the Definition tab, enter a name for your model, choose Oracle as the technology and choose the logical schema we just created. Go to the Reverse Engineer tab; here you can leave everything at Standard/Table, which means a standard reverse-engineering of all the tables in the given schema. If you want to reverse-engineer only the PERIOD table, go to the Selective Reverse-Engineering tab, tick the boxes at the top, then under the Table Name column tick only the PERIOD box, and finally click the Reverse Engineer button (at the top left).

  • How can I import tables from a different schema into an existing relational model... to add these tables to the existing model? Please help

    How can I import tables from a different schema into the relational model... to add these tables to the existing relational/logical model? Please help.

    Note: I already have a relational/logical model ready for one schema... and I need to add more tables to this relational/logical model.

    Can I import them the same way I did before?

    But even if I do the same, how can I add them to the model, given that the logical model has already been designed...

    Help, please...

    Thank you

    To view the logical model diagram in Bachman notation, right-click on a blank area of the diagram and select Notation > Bachman Notation.

    David

  • Changing all table triggers in an Oracle schema by using a script - possible?


    Is it possible to modify all the table triggers in an Oracle schema using a single script, or is modifying each table trigger separately the only way?

    A couple of alternatives come to mind.

    (1) You can go into SQL Developer, go to the schema, expand the Triggers node and select all the triggers that you want to change.  Right-click and choose Quick DDL --> Save To Worksheet.  Do a find-and-replace in the worksheet and then run the script.

    (2) If the trigger is the same for all 70 tables, you can move the PL/SQL out of the trigger and into a procedure in a package, and just call the procedure from the 70 triggers.  Now the code is kept in one place.  For new tables, you add a call to the procedure and you are done.  If all 70 triggers are not the same, see if you can pass parameters to allow a generalized procedure that they can all use.
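    A sketch of option (2), with invented names, where each trigger shrinks to a one-line call into a package:

    CREATE OR REPLACE PACKAGE trg_common AS
      PROCEDURE log_change(p_table IN VARCHAR2, p_key IN VARCHAR2);
    END trg_common;
    /

    CREATE OR REPLACE PACKAGE BODY trg_common AS
      PROCEDURE log_change(p_table IN VARCHAR2, p_key IN VARCHAR2) IS
      BEGIN
        -- the shared logic lives here, in one place
        INSERT INTO change_log (table_name, key_value, changed_on)
        VALUES (p_table, p_key, SYSDATE);
      END log_change;
    END trg_common;
    /

    -- each of the 70 triggers becomes a thin wrapper
    CREATE OR REPLACE TRIGGER emp_aiud
    AFTER INSERT OR UPDATE OR DELETE ON emp
    FOR EACH ROW
    BEGIN
      trg_common.log_change('EMP', TO_CHAR(COALESCE(:NEW.empno, :OLD.empno)));
    END;
    /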

    You have not indicated what your triggers do.  Are you inserting audit columns, archiving data, inserting into a log table, updating another table or something else?  What type of trigger is it?  What are you trying to accomplish with your triggers?

    Marcus Bacon

  • Performance question - caching data from a large table

    Hi all

    I have a general question about caching; I am using an Oracle 11g R2 server.

    I have a large table of about 50 million rows which is accessed very often by my application. Some queries run slowly and some are OK. But (of course) when the table's data is already in the cache (so basically when a user asks for the same thing twice or several times) it runs very quickly.

    Does anyone have any recommendations on caching data / a table of this size?

    Thank you very much.

    Chiwatel wrote:

    With better formatting (I hope) - sorry, I'm not used to the new forum!

    Plan hash value: 2501344126

    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                      | Name           | Starts | E-Rows | E-Bytes | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time    | Buffers | Reads |  OMem |  1Mem | Used-Mem  |
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT               |                |      1 |        |         |  7232 (100)|       |       |  68539 | 00:14:20.06 |    212K | 87545 |       |       |           |
    |   1 |  SORT ORDER BY                 |                |      1 |   7107 |    624K |   7232  (1)|       |       |  68539 | 00:14:20.06 |    212K | 87545 | 3242K |  792K | 2881K (0) |
    |   2 |   NESTED LOOPS                 |                |      1 |        |         |            |       |       |  68539 | 00:14:19.26 |    212K | 87545 |       |       |           |
    |   3 |    NESTED LOOPS                |                |      1 |   7107 |    624K |   7230  (1)|       |       |  70492 | 00:07:09.08 |    141K | 43779 |       |       |           |
    |*  4 |     INDEX RANGE SCAN           | CM_MAINT_PK_ID |      1 |   7107 |    284K |     59  (0)|       |       |  70492 | 00:00:04.90 |     496 |   453 |       |       |           |
    |   5 |     PARTITION RANGE ITERATOR   |                |  70492 |      1 |         |      1  (0)|   KEY |   KEY |  70492 | 00:07:03.32 |    141K | 43326 |       |       |           |
    |*  6 |      INDEX UNIQUE SCAN         | D1T400P0       |  70492 |      1 |         |      1  (0)|   KEY |   KEY |  70492 | 00:07:01.71 |    141K | 43326 |       |       |           |
    |*  7 |    TABLE ACCESS BY INDEX ROWID | D1_DVC_EVT     |  70492 |      1 |      49 |      2  (0)| ROWID | ROWID |  68539 | 00:07:09.17 |   70656 | 43766 |       |       |           |
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):

    ---------------------------------------------------

    4 - access("ERO"."MAINT_OBJ_CD"='D1-DEVICE' AND "ERO"."PK_VALUE1"='461089508922')

    6 - access("ERO"."DVC_EVT_ID"="E"."DVC_EVT_ID")

    7 - filter(("E"."DVC_EVT_TYPE_CD"='END-GSMLOWLEVEL-EXCP-SEV-1' OR "E"."DVC_EVT_TYPE_CD"='STR-GSMLOWLEVEL-EXCP-SEV-1'))

    Your user has run a query that returns 68,000 rows - what type of user is it? A human being cannot realistically cope with that much data, and it's not entirely surprising that it might take a long time to return.

    One thing I would check is whether you always get the same execution plan - Oracle's estimates here are out by a factor of nearly 10 (7,100 estimated rows vs 68,500 returned), so it may be that part of your variation in timing is down to changes of plan.

    If you check the numbers you will see that about half your time went on probing the unique index, and half on visiting the table. In general it is hard to beat Oracle's caching algorithms, but indexes are often much smaller than the tables they cover, so it is possible that your best strategy is to protect this index at the expense of the table. Rather than trying to create a KEEP cache for the index, though, you MIGHT find that you get some benefit from creating a RECYCLE cache for the table, using a small percentage of the available memory - the aim is to arrange things so that the table blocks you revisit do not push out of memory the index blocks you will be coming back to.

    Another detail to consider is that if you visit the index and the table in a completely random order (for 68,500 locations), you may find yourself re-reading blocks several times in the course of the visit. If you sort the intermediate result set from the driving table first, you may find that you then walk the index and the table in order and don't have to re-read any blocks. That is something only you can know, though. The code would have to change to include an inline view with a couple of no_merge and no_eliminate_oby hints.
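    The query itself is not shown in the thread, so this is only the shape of the change described above, with an invented name for the driving table and the predicates taken from the plan:

    SELECT e.*
    FROM  (SELECT /*+ no_merge no_eliminate_oby */
                  ero.dvc_evt_id
           FROM   maint_evt ero
           WHERE  ero.maint_obj_cd = 'D1-DEVICE'
           AND    ero.pk_value1    = '461089508922'
           ORDER BY ero.dvc_evt_id) v
    JOIN   d1_dvc_evt e
           ON e.dvc_evt_id = v.dvc_evt_id
    WHERE  e.dvc_evt_type_cd IN ('END-GSMLOWLEVEL-EXCP-SEV-1', 'STR-GSMLOWLEVEL-EXCP-SEV-1');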

    Regards

    Jonathan Lewis

  • The virtual machine must be moved to another host?

    vCenter 4 runs on a virtual machine on vHost1. When I try to run an update for the virtual server I get this message:

    The host has a VM with VMware vCenter Update Manager or VMware vCenter installed. The virtual machine must be moved to another host for the remediation process to proceed.

    Before moving the running vCenter virtual machine to a different host, I thought I should ask if there is a better way to update vHost1?

    Thanks for your help. This is all quite new to me.

    Use VMotion to move your vCenter guest and you'll be fine.  Or, if you have DRS set to fully automated, when you remediate one or more hosts DRS will move all of your guests.

  • Joining two tables in two different schemas

    Hi all

    I have a requirement to join two tables that are in two different schemas. How do I join these two tables in the view object?

    Thanks in advance.

    Regards
    Kaushik Guillaumin

    You can do just that by using the schema name as a prefix, i.e. 'schema name.table name', in the view object's query, and by granting SELECT on the other schema's table to this schema's user.

    For example: your schema is TEST and the other one is ACT, and you need a view object based on TEST that joins a table in schema ACT.

    You simply write in the view object's SQL:

    ACT.table_name

    and you also grant SELECT on the ACT schema's table to TEST.
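    A minimal sketch of the idea (schema and table names are invented):

    -- as ACT (or a DBA): allow TEST to read the table
    GRANT SELECT ON act.invoices TO test;

    -- in the view object's SQL query, owned by TEST, prefix the other schema
    SELECT o.order_id, o.order_date, i.invoice_total
    FROM   orders o
    JOIN   act.invoices i ON i.order_id = o.order_id;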

    Regards

  • Replace characters in large tables

    Hello

    I have a lot of large tables to change:
    replace 'AB' with 'C'
    and 'DE' with 'F'
    ...
    10 words to replace

    I can do:
    Update test set nom = replace(nom,'AB','C');

    To avoid scanning the tables 10 times,
    I am thinking of using some sort of mapping table:
    'AB' => 'C'
    'DE' => 'F'
    ...

    How can I do this?
    Thanks in advance
    11gR2

    Hello

    John Spencer wrote:
    CASE is good for that sort of thing. Something like:

    update test
    set nom = case when instr(nom, 'AB') != 0
    then replace(nom, 'AB', 'C')
    when instr(nom, 'DE') != 0
    then replace(nom, 'DE', 'F')
    else nom end;
    

    John

    But be careful: if the same row can contain two or more of the source substrings, CASE will replace only one of them. For example, if nom = 'DEXYAB', the UPDATE above will change it to 'DEXYC', not to 'FXYC'. To change any or all of the 10 source substrings, you can use


    • 10 nested REPLACE expressions, as suggested by Etbin,
    • MODEL (see SQL Query),
    • a recursive WITH clause (available in Oracle 11.2), or
    • a user-defined function. That's what I generally use.
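    For the last option, a sketch of a mapping-table-driven function; the table and column names are made up, and if one replacement's output could match a later source string you would need to control the order of the mapping rows:

    -- the 10 source/target pairs live in a small mapping table
    CREATE TABLE str_map (src VARCHAR2(10), tgt VARCHAR2(10));
    INSERT INTO str_map VALUES ('AB', 'C');
    INSERT INTO str_map VALUES ('DE', 'F');

    CREATE OR REPLACE FUNCTION replace_all (p_text IN VARCHAR2)
      RETURN VARCHAR2
    IS
      l_text VARCHAR2(4000) := p_text;
    BEGIN
      FOR r IN (SELECT src, tgt FROM str_map) LOOP
        l_text := REPLACE(l_text, r.src, r.tgt);
      END LOOP;
      RETURN l_text;
    END replace_all;
    /

    -- one pass over each large table
    UPDATE test SET nom = replace_all(nom);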
