Accessing large partitioned tables over a database link - any traps?

Hello

We are in the middle of a business acquisition, and I have a question about using database links to access large tables efficiently. There are two geographically distinct database instances, both on Oracle 10.2.0.5 sitting on Linux boxes.

The main instance (PSHR) contains a PeopleSoft HR and payroll system and sits in our data center.

The secondary instance (HGPAY) runs a home-grown payroll application and is in a different data center from PSHR.

The requirement is to allow PeopleSoft (PSHR) to display payroll data (one employee at a time) sourced from the secondary instance.

For example in HGPAY

CREATE TABLE MY_PAY_DATA AS
SELECT TO_CHAR(A.RN, 'FM00000000') "EMP"            -- an 8-digit, zero-padded unique identifier
     , '2010' || TO_CHAR(B.RN, 'FM00') "PAY_PRD"    -- a year-plus-pay-period format, fortnightly (01-27)
     , C.SOME_KEY                                   -- the pay element being considered - effectively random
     , 'XXXXXXXXXXXXXXXXX' "FILLER1"
     , 'XXXXXXXXXXXXXXXXX' "FILLER2"
     , 'XXXXXXXXXXXXXXXXX' "FILLER3"
FROM  (SELECT ROWNUM "RN" FROM DUAL CONNECT BY LEVEL <= 300) A
    , (SELECT ROWNUM "RN" FROM DUAL CONNECT BY LEVEL <= 3) B
    , (SELECT TRUNC(ABS(DBMS_RANDOM.RANDOM)) "SOME_KEY" FROM DUAL CONNECT BY LEVEL <= 300) C
ORDER BY PAY_PRD, EMP;

HGPAY.MY_PAY_DATA is range partitioned on EMP (approximately 300 employees per partition) and list subpartitioned on PAY_PRD (3 pay periods per subpartition). I limited the CREATE statement above to represent one subpartition's worth of data.
Each employee generates an average of 300 rows in this table each pay period. The table holds about 180 million rows and grows every fortnight.
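
To make the scheme concrete, here is a rough sketch of the DDL (the column types, partition names and boundary values are illustrative, not our real definitions):

CREATE TABLE MY_PAY_DATA_PART
( EMP      VARCHAR2(8)
, PAY_PRD  VARCHAR2(6)
, SOME_KEY NUMBER
, FILLER1  VARCHAR2(17)
, FILLER2  VARCHAR2(17)
, FILLER3  VARCHAR2(17)
)
PARTITION BY RANGE (EMP)
SUBPARTITION BY LIST (PAY_PRD)
( PARTITION EMP_0300 VALUES LESS THAN ('00000301')                        -- ~300 employees per partition
  ( SUBPARTITION EMP_0300_P1 VALUES ('201001','201002','201003') )        -- 3 pay periods per subpartition
, PARTITION EMP_MAX VALUES LESS THAN (MAXVALUE)
  ( SUBPARTITION EMP_MAX_PDEF VALUES (DEFAULT) )
);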

In PSHR

CREATE VIEW PS_HG_PAY_DATA (EMP, PAY_PRD, SOME_KEY, FILLER1, FILLER2, FILLER3)
AS SELECT EMP, PAY_PRD, SOME_KEY, FILLER1, FILLER2, FILLER3 FROM MY_PAY_DATA@HGPAY;

PeopleSoft would then generate SQL along the lines of

SELECT * FROM PS_HG_PAY_DATA WHERE EMP = '00002561' AND PAY_PRD = '201025'

The link between the data centers where PSHR and HGPAY sit isn't the best in the world, but I expect dozens of hits per day rather than thousands, so I think the link should have sufficient bandwidth to meet the requirement.

I tried a quick test on two production-sized test instances and it works, in that it returns the data; when I look at the explain plan I can see that the remote database is serving up only the relevant subpartition to PSHR rather than the entire table. Before I pat myself on the back with a "job well done" - are there any gotchas I'm missing in using a db link to access large partitioned tables?

That's about right. A lot of it depends on exactly what happens in the various "oops" scenarios - whether, for example, you just burn some extra CPU until someone goes to the DBA and says "my query is slow", or whether saturating the network impacts mission-critical applications, or whether long-running rogue queries block some partition maintenance operations.

To my mind, the simplest possible solution (assuming you are using a fixed username in the database link) would be to create a profile on HGPAY for the user the database link connects as, with a LOGICAL_READS_PER_CALL value that is large enough to handle any "reasonable" request and low enough to quickly kill any session that tries to do something "stupid". Obviously, you have to define "stupid" in your particular environment, particularly where the scope of a "simple reconciliation report" is unbounded. If there are no political problems and you can adjust the profile values on the fly as you encounter new reports that slowly increase what counts as "reasonable", this is probably the most straightforward approach. If, on the other hand, you have to submit a change request to alter the parameter, which must be reviewed by the change control board at its next quarterly meeting with the outsourced DBA vendor, you could turn a 30-minute report into 30 hours over 30 days. In an ideal world, though, this is where I would start.
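
Something along these lines (the profile name, user name and limit are placeholders - size them for your site):

CREATE PROFILE DBLINK_LIMIT LIMIT
  LOGICAL_READS_PER_CALL 1000000;   -- blocks per call; size to your largest "reasonable" query

ALTER USER HGPAY_LINK_USER PROFILE DBLINK_LIMIT;

-- Profile resource limits are only enforced when RESOURCE_LIMIT is TRUE.
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;

Strictly speaking a call that exceeds the limit is aborted with ORA-02395 rather than the session being killed outright, but the effect is the same: the runaway query stops.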

Getting more complex, you can use the Resource Manager to kill queries that run too long on the wall clock. Since the network will almost certainly be the bottleneck, it is probably unlikely that limiting CPU will do much good - you can probably saturate the network with a very small amount of CPU. Throttling the network is, in my mind, an additional step up in complexity, depending on the specifics of your situation and what you are competing with.
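
A rough sketch of that route (the group, plan and thresholds are placeholders; check the exact directive semantics on your release):

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('DBLINK_GROUP', 'Sessions arriving via the PSHR link');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN('DBLINK_PLAN', 'Limit runaway link queries');

  -- Kill sessions in the group once they exceed the switch_time threshold.
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
      plan             => 'DBLINK_PLAN',
      group_or_subplan => 'DBLINK_GROUP',
      comment          => 'Kill anything running longer than 10 minutes',
      switch_group     => 'KILL_SESSION',
      switch_time      => 600);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
      plan             => 'DBLINK_PLAN',
      group_or_subplan => 'OTHER_GROUPS',
      comment          => 'Everything else');

  -- Map the fixed db link user to the restricted group.
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
      DBMS_RESOURCE_MANAGER.ORACLE_USER, 'HGPAY_LINK_USER', 'DBLINK_GROUP');

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/

-- The mapped user also needs permission to switch into the group:
EXEC DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP('HGPAY_LINK_USER', 'DBLINK_GROUP', FALSE);

-- And the plan must be activated:
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DBLINK_PLAN';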

Justin

Tags: Database

Similar Questions

  • altering an existing large partitioned table to automatically add partitions

    Hello everyone

    As far as I can tell, it is not possible to alter an existing large partitioned table so that it adds partitions automatically as you insert into it. What would be the best way to do this? DBMS_REDEFINITION? CTAS + copy dependents + rename?

    Any ideas?

    Thank you.

    current version:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options

    >
    As far as I can tell, it is not possible to alter an existing large partitioned table so that it adds partitions automatically as you insert into it. What would be the best way to do this? DBMS_REDEFINITION? CTAS + copy dependents + rename?

    Any ideas?
    . . .
    I guess what I'm looking for is
    ALTER TABLE XXX SET INTERVAL (NUMTOYMINTERVAL(1,'MONTH'));

    The next problem I see is that my required interval runs from 01:00 on day 01 of one month to 01:00 on day 01 of the following month.
    >
    The best way to do it is to change your way of thinking.

    Don't be afraid of breaking Oracle by actually trying things. Just ALTER the table.

    drop table part_test cascade constraints;
    
    create table part_test(id number, entry_date date)
    partition by range (entry_date)
    (
    partition old_years values less than (to_date('01/01/2013 01:00:00', 'mm/dd/yyyy hh24:mi:ss')),
    partition jan_2013 values less than (to_date('02/01/2013 01:00:00', 'mm/dd/yyyy hh24:mi:ss'))
    );
    
    insert into part_test values(1, to_date('01/01/2013 00:00:00', 'mm/dd/yyyy hh24:mi:ss'));
    
    insert into part_test values(2, to_date('01/01/2013 02:00:00', 'mm/dd/yyyy hh24:mi:ss'));
    
    alter table part_test set interval(numtoyminterval(1,'MONTH')); 
    
    insert into part_test values(2, to_date('02/01/2013 02:00:00', 'mm/dd/yyyy hh24:mi:ss'));
    
    SQL> set serveroutput on
    SQL> select partition_name, high_value
      2  from dba_tab_partitions
      3  where table_name = 'PART_TEST';
    
    PARTITION_NAME
    ------------------------------
    HIGH_VALUE
    --------------------------------------------------------------------------------
    
    OLD_YEARS
    TO_DATE(' 2013-01-01 01:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
    
    JAN_2013
    TO_DATE(' 2013-02-01 01:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
    
    SYS_P225
    TO_DATE(' 2013-03-01 01:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
    
    SQL>
    

    Notice the partition boundary of the SYS_P225 partition that was created? It carries over the same '01:00' time as the highest range partition that existed before the ALTER.

  • Number of columns of a table over a database link

    Hello Oracle community,

    I can access an external database via a database link. Is it possible in SQL or PL/SQL to get the number of columns of a known table? The external database systems can differ - MySQL for now, but in the future other DBMSs such as MSSQL. All I need is the number of columns.

    Ikrischer

    Is it possible in SQL or PL/SQL to get the number of columns of a known table?

    Do you mean something like

    select count(*) from all_tab_columns@remote_db where table_name = 'YOUR_TABLE_NAME'
    

    ?

  • Changing the data type of a column on a large compressed partitioned table

    Hello

    I have been tasked with changing a NUMBER column to a VARCHAR2 column.

    It is usually an easy task, but I ran into several challenges due to the size of the table; the second major problem is table compression.

    To begin with, the table and index are 4.4 TB and hold around 11 billion rows. The table is compressed, the index is not.

    The first option was to add a new VARCHAR2 column, update it from the NUMBER column, and then drop the NUMBER column.

    We ran this test in the test environment and discovered that, because of table compression, you can only mark the column as unused; it can only be removed permanently from the table if you uncompress the whole table and run 'ALTER TABLE ... DROP UNUSED COLUMNS'.

    The second way is to create a separate copy table holding the values of the source column, set the source column to null, change the data type while it is empty, and then update the source again from the copy table.

    The second option is what I'm testing, but with nearly 30 million rows per daily partition, the inserts take very long to run.

    How can I make the inserts between the source and the copy table faster?
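
    For reference, the usual lever here is a parallel, direct-path, partition-at-a-time copy - a rough sketch, with all object, column and partition names made up:

    alter session enable parallel dml;

    insert /*+ append parallel(c, 8) */ into copy_table c
    select /*+ parallel(s, 8) */ s.pk_col, to_char(s.num_col) new_col
    from   source_table partition (p_20140101) s;

    commit;   -- a direct-path insert must be committed before the session can read the segment again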

    What other avenues could I follow to accomplish the task above?

    Regards

    Stephan

    Nowhere in a proposed solution was it necessary for you to understand WHY the source data changes; it's as simple as taking what is on the table (all the facts in the original post) and offering the best way forward based on previous experience or Oracle skills acquired over time.

    If you had spent as much time and thought helping with a solution as you did complaining about the WHY, you could have provided some feasible solutions already.

    No thanks to your irrelevant and unnecessary information, I've already implemented a solution:

    1. Export the source table.

    2. Create a "replica table" from the source DDL, using a temporary table name.

    3. Import all partitions into the new table.

    4. Before the cut-over, synchronize the changes from the source into the replica table.

    5. Rename the source table.

    6. Rename the replica table to the original source table name.

    7. Drop the original source table.

  • Large partitioned table with the primary key referenced by several child tables

    I tested this procedure and it works perfectly. However, when child tables reference the key column, after execution of dbms_redefinition.finish_redef_table they all end up referring to the interim table. In my test I dropped the FK constraints on the child tables and recreated them. What would be the best way to handle this scenario?

    Thanks for all comments/opinions!

    Are you sure? You should end up with the original FK constraints (disabled, and renamed with a TMP$$ prefix) pointing to the interim table (which is of course the original table under its new name), and new, enabled constraints with the correct names pointing to the new version of the table. So all you need to do to tidy up is drop the renamed constraints.

    Running this in the SCOTT schema shows it clearly:

    create table deptint as select * from dept where 1 = 2;

    exec dbms_redefinition.start_redef_table(user, 'DEPT', 'DEPTINT')

    var n number

    exec dbms_redefinition.copy_table_dependents(user, 'DEPT', 'DEPTINT', dbms_redefinition.cons_orig_params, TRUE, TRUE, TRUE, TRUE, :n)

    print n

    exec dbms_redefinition.finish_redef_table(user, 'DEPT', 'DEPTINT')

    select constraint_name, constraint_type, table_name, status from user_constraints;

  • No access to the data over a database link when calling procedures in APEX 5

    Hello

    I use

    • APEX 5.0.3
    • APEX DB: Oracle DB 12c
    • Linked DB: Oracle DB 11g

    When calling procedures and packages from the APEX side over the linked DB, I can't access the data with a "select ... from" on the tables in the linked DB.

    Is it because of the different DB versions?

    Is there a general setting in my APEX 5.0.3 that I need in order to access the data in the tables in the linked DB?

    Any help appreciated.

    Thanks in advance.

    Regards

    Norbert

    Hello

    Thanks for the reply.

    But at least we don't have to do the upgrade again.

    ... after editing the dblink and recompiling the schema, it all works.

    Regards

    Norbert

  • Call a procedure and a function over a db link

    I'm hitting a few errors with the following and would greatly appreciate the views of some experts here.

    My use case is to insert records into a table via a database link. The records to insert are selected from an identical table in the local database. Everything works, but sometimes I get a unique constraint violation if I try to insert a duplicate record, so I wrote a simple function to check for that scenario. My problem is that I can run my procedure using the db link and I can run my function using the db link, but I can't use the two together without errors.

    My test scenario uses only the standard emp table:

    create or replace procedure test_insert(p_instance varchar2)
    IS
    l_sql varchar2(4000);
    begin
        l_sql := 'insert into EMP@'||p_instance||' (EMPNO, ENAME, JOB, MGR, SAL, DEPTNO) (Select EMPNO, ENAME, JOB, MGR, SAL, DEPTNO from EMP)';
    execute immediate l_sql;
    END;
    
    

    BEGIN
    test_insert('myLink');
    END;
    
    

    It works very well and the insert occurs without any problem.

    If I run the same process a second time, I get:

    ORA-00001: unique constraint (%s.%s) violated - which is what I expected, since EMPNO has a unique constraint. So far so good.

    Now, I create a function to check if the record exists:

    create or replace function record_exists(p_empno IN NUMBER, p_instance IN varchar2) return number
    IS
    l_sql varchar2(4000);
    l_count number;
    BEGIN
    l_sql := 'select count(*) from EMP@'||p_instance||' where empno = '||p_empno;
    execute immediate l_sql into l_count;
    IF
    l_count > 0
    THEN return 1;
    ELSE
    return 0;
    END IF;
    END;
    
    

    I test this situation as follows:

    select record_exists(8020, 'myLink') from dual;
    
    

    RECORD_EXISTS(8020,'myLink')

    -------------------------------------------

    1

    That works well, so now I'll add this function to my procedure:

    create or replace procedure test_insert(p_instance varchar2)
    IS
    l_sql varchar2(4000);
    begin
        l_sql := 'insert into EMP@'||p_instance||' (EMPNO, ENAME, JOB, MGR, SAL, DEPTNO) (Select EMPNO, ENAME, JOB, MGR, SAL, DEPTNO from EMP WHERE record_exists( EMPNO, '''||p_instance||''') = 0)';
    execute immediate l_sql;
    END;
    
    

    I test this situation as follows:

    BEGIN
    test_insert('myLink');
    END;
    
    

    The result:

    Error report:
    ORA-02069: global_names parameter must be set to TRUE for this operation
    ORA-06512: at "FUSION.TEST_INSERT", line 6
    ORA-06512: at line 2
    02069. 00000 -  "global_names parameter must be set to TRUE for this operation"
    *Cause:    A remote mapping of the statement is required but cannot be achieved
               because global_names should be set to TRUE for it to be achieved
    *Action:   Issue alter session set global_names = true if possible
    
    

    I don't know why I'm getting this. The function works, the procedure works, but when I combine them I get an error. If I set global_names to true and then run it again I get:

    
    02085. 00000 -  "database link %s connects to %s"
    *Cause:    a database link connected to a database with a different name.
               The connection is rejected.
    *Action:   create a database link with the same name as the database it
               connects to, or set global_names=false.
    
    

    All opinions are appreciated. I do not understand why I can run the procedure and the function separately over the db link, but they don't work together.

    Thank you

    John

    What the procedure should do about a duplicate depends on how you define failure and what it should mean - raise an error to the caller, log and continue, or just continue. Constraints are created to ensure that no invalid data gets into a table. Generally, they provide the most efficient mechanism for checking for invalid data, and they return useful exceptions that the caller can handle. By adding your own check you are also doubling the workload of the uniqueness check.

    In general I would say: use exceptions, they are your friend.
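
    For illustration, a rough sketch along those lines (not tested here): a single MERGE keeps the whole DML remote, so no local function call gets mixed into the remote statement, and duplicates are simply skipped instead of raising ORA-00001:

    create or replace procedure test_insert(p_instance varchar2)
    is
      l_sql varchar2(4000);
    begin
      -- Let the remote table decide which rows are new.
      l_sql := 'merge into EMP@'||p_instance||' r
                using (select EMPNO, ENAME, JOB, MGR, SAL, DEPTNO from EMP) l
                on (r.EMPNO = l.EMPNO)
                when not matched then
                  insert (EMPNO, ENAME, JOB, MGR, SAL, DEPTNO)
                  values (l.EMPNO, l.ENAME, l.JOB, l.MGR, l.SAL, l.DEPTNO)';
      execute immediate l_sql;
    end;
    /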

  • partitioned tables for large logs

    Hi, we are looking at using Oracle 11g as a log database with partitioned tables.

    - The tables will have only 3-5 columns of size ~varchar(50).
    - We are looking at a volume of ~33 million rows (inserts) per day.

    (1) Will partitioned tables be able to handle this kind of volume?
    (2) If so, is composite partitioning (range on the last-updated datetime, then subpartitioning by range) the best choice?

    If this kind of volume is too high for 11g, what are some alternative products we could use?

    Thank you

    >
    This is to store stock price history. "History" is a better word than "log"; the table is queried by date/key.

    We are looking at 30 million rows a day with 2 years' retention. That is about 21 billion rows.

    Can an Oracle partitioned table handle this kind of size with acceptable performance, assuming we use range partitioning and cluster on the queried date/key?
    >
    As Curly said: soitenly!

    You said the "cluster" key Do you mean the partition key?

    So just create an INTERVAL partitioned table, partitioned by day.

    DROP TABLE SCOTT.PARTITION_DAILY_TEST CASCADE CONSTRAINTS;
    
    CREATE TABLE SCOTT.PARTITION_DAILY_TEST
    (
      ID       NUMBER(10),
      MY_DATE  DATE
    )
    PARTITION BY RANGE (MY_DATE)
    INTERVAL( NUMTODSINTERVAL(1,'DAY'))
    (
      PARTITION P_EXISTING_20120401 VALUES LESS THAN (TO_DATE(' 2012-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    );
    

    New partitions will be created automatically, and you can easily drop older ones as you wish.

  • Database link to a partitioned table

    Hello

    I'm running 10g and have a database link to another database that has partitioned tables.
    Is it possible to query partitioned tables via a link?

    Mattias

    Hello

    You cannot query a particular partition via a db link.
    Workaround: use a where clause on the partitioning column.
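
    For example, with the tables from the main question above (names assumed): a partition-extended name such as my_pay_data PARTITION (p1) cannot be used on a remote table, but a predicate on the partitioning columns lets the remote optimizer prune:

    select *
    from   my_pay_data@hgpay
    where  emp     = '00002561'
    and    pay_prd = '201025';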

    Regards
    Anurag

  • Partitioned Tables and indexes

    Hello


    I have a question on table and index partitioning. My scenario is:

    Load 2 million records into table T once a month. Loaded records are added to the existing records, and once loaded, the data is never changed.
    At some point I want to delete the oldest records, so I intend to partition the table.

    Table T looks like:
    create table t (id       number(10) not null  constraint t_pk primary key,
                    period   number(10) not null,
                    contract number(10) not null,
                    attr     number(10) not null);
    
    create unique index t_ux1 on t(contract,period);
    
    create index t_ix2 on t(period);
    My plan is to partition T on period, and I'm trying to read through the Concepts guide:
    http://download.Oracle.com/docs/CD/B19306_01/server.102/b14220/partconc.htm#g471747


    My question now is how to manage the indexes - the t_pk, the t_ux1 and the t_ix2. The Concepts guide says:

    "1. If the table partitioning column is a subset of the index keys, use a local index."

    "2. If the index is unique, use a global index. If this is the case, you are finished."


    So this is how I read it:
    - t_pk is unique, so it should be global
    - the partitioning column is a subset of t_ux1's columns, unless I have misunderstood (?), so it should be local
    - t_ix2's column is the same as the partitioning column, so it must be local

    Is this right - should t_ux1 be a locally partitioned index, even though period is the second column in the index?

    If so, what will happen when a partition is dropped?


    I am new to this area, so please comment as you wish.


    Regards
    Peter


    BANNER
    ----------------------------------------------------------------
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
    PL/SQL Release 10.2.0.3.0 - Production
    CORE 10.2.0.3.0 Production
    TNS for IBM/AIX RISC System/6000: Version 10.2.0.3.0 - Production
    NLSRTL Version 10.2.0.3.0 - Production

    Peter Gjelstrup wrote:

    My question now is how to manage the indexes - the t_pk, the t_ux1 and the t_ix2. The Concepts guide says:

    "1. If the table partitioning column is a subset of the index keys, use a local index."

    "2. If the index is unique, use a global index. If this is the case, you are finished."

    So this is how I read it:
    - t_pk is unique, so it should be global
    - the partitioning column is a subset of t_ux1's columns, unless I have misunderstood (?), so it should be local
    - t_ix2's column is the same as the partitioning column, so it must be local

    Is this right - should t_ux1 be a locally partitioned index, even though period is the second column in the index?

    A locally partitioned index can only be defined as unique if the partition key is part of the index columns. Imagine what the database would have to do if this were not the case: in order to verify whether a newly added or updated value violates the uniqueness, it would have to visit all the partitions in a serialized operation - meaning no one else could do the same thing at the same time. Since that is a serious scalability killer in terms of locking and contention, it is not allowed.

    So: your T_UX1 index can be defined as a unique, local index because it contains the partition key. However the index is non-prefixed ("prefixed" means it is partitioned by the leading columns of the index), which means there may be access patterns where all partitions have to be scanned, or where the optimizer cannot use an efficient partition pruning method, depending on how the index is accessed.

    Your T_PK index cannot be local because it must be unique (you cannot use a non-unique local index in this case) but does not contain your partition key. It must be a global index. A global index can be partitioned as well (differently from the underlying table), but it doesn't have to be.

    Depending on how you access your data, you may not need the T_IX2 index at all once you partition by this key, because it corresponds to the partition key and its work could effectively be done by the partition pruning mechanism that limits your query to the individual partitions.
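
    A minimal sketch of the resulting scheme, assuming T is range partitioned by period (the partition boundaries are illustrative):

    create table t (id       number(10) not null,
                    period   number(10) not null,
                    contract number(10) not null,
                    attr     number(10) not null)
    partition by range (period)
    (partition p_2008_09 values less than (200810),
     partition p_2008_10 values less than (200811));

    create unique index t_ux1 on t(contract, period) local;  -- unique local is legal: the partition key is included (non-prefixed)
    create index t_ix2 on t(period) local;                   -- prefixed local; possibly redundant given partition pruning
    alter table t add constraint t_pk primary key (id);      -- backed by a global (non-partitioned) unique index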

    If you have more of a DSS/DWH environment where long-running queries are the norm, you should be fine with the local indexes in general (because they can be scanned in parallel in parallel operations), but if you have an OLTP environment, then you should avoid non-prefixed local indexes because of the potential need to probe all partitions.

    Bear in mind that partitioning adds an important layer of complexity in other areas: in particular the options available to the optimizer, and cost optimizer statistics gathering. Depending on how you access your data, statistics now have to be maintained at several levels (partition level and global level; with subpartitioning, possibly at that level too). If your data volume is significant and you rely on "global" level statistics (which is always the case when the optimizer at parse time cannot restrict access to a single partition), then in pre-11g databases gathering these "global" level statistics can take a lot of time and resources, since you effectively need to read the data several times (once for the partition level and again for the global level).

    Introducing partitioning may therefore mean other potential problems, in terms of execution plans that change (not always for the better) and of how to gather statistics efficiently. Note that 11g addresses the statistics issue by introducing so-called "incremental" global statistics. Greg Rahn wrote a [blog note|http://structureddata.org/2008/07/16/oracle-11g-incremental-global-statistics-on-partitioned-tables/] about this nice feature.

    >

    If so, what will happen when a partition is dropped?

    Since you're already on 10g, you can tell the database to maintain the local index partitions using the UPDATE INDEXES clause of the partition DDL, whereas 9i could maintain only global indexes and it was up to you to rebuild the local index partitions after partition DDL on the table (depending on the DDL operation).

    Kind regards
    Randolf

    Oracle related blog stuff:
    http://Oracle-Randolf.blogspot.com/

    SQLTools ++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.NET/projects/SQLT-pp/

    Edited by: Randolf Geist on Sep 30, 2008 16:39

    Added the statistics / optimizer caveat when using partitioning

  • partition table

    When do we need to use a partitioned table?

    Partitioning defined

    The concept of divide and conquer has been around since the time of Sun Tzu (500 BC). Recognizing the wisdom of this concept, Oracle applied it to the management of large tables and indexes. Oracle has continued to evolve and refine its partitioning capabilities since the first implementation of range partitioning in Oracle 8. In Oracle 8i and 9i, Oracle continued to add features and new partitioning methods. The current version, Oracle 9i Release 2, continues this tradition by adding new features to list partitioning and the new range-list partitioning method.

    When to partition

    There are two main reasons to use partitioning in a VLDB (very large database) environment: improved manageability and improved performance.

    Partitioning offers:

    • Management at the individual partition level for data loads, index creation and rebuilds, and backup/recovery. This can mean less downtime, because only the individual partitions being actively managed are unavailable.
    • Increased query performance, by selecting only the relevant partitions. This weeding out of partitions that do not contain the data required by the query is done through a technique called partition pruning.

    Use partitioning:

    • When a table reaches a "large" size - with large defined according to your environment. Tables greater than 2 GB should always be considered candidates for partitioning.
    • When the performance benefits outweigh the additional management issues related to partitioning.
    • When data archiving is on a repetitive schedule. For example, data warehouses usually hold data for a fixed period of time (a rolling window); old data is then rolled out to archive.
  • Best way to split a large partition into a few smaller ones

    Hello

    I have Oracle 11.2 EE on Red Hat 5.9 and I need to solve a problem with partitioning.

    Some tables in one system were prepared for partitioning a few years ago. But the partitioning was never carried out, and all these tables / global indexes have only one partition holding all the data.

    Now we have a lot of tables and indexes with a single partition (with a MAXVALUE limit) holding all the data. I would like to split this large partition into smaller partitions, one per quarter of the year.

    Example:

    The existing partition D0201_2008_1Q must be split into D0201_2008_2Q, D0201_2008_3Q, ... MYTABLE_2014_4Q, on a DATE/NUMBER column.

    I tried to generate a script for splitting the partitions:

    ALTER INDEX I_D0201 MODIFY DEFAULT ATTRIBUTES TABLESPACE INDX_2008_1Q;

    ALTER TABLE D0201 SPLIT PARTITION D0201_2008_1Q AT (1000456)
    INTO (PARTITION D0201_2008_XX TABLESPACE DATA_2008_1Q, PARTITION D0201_MAX1) PARALLEL 16;

    ALTER TABLE D0201 MODIFY PARTITION D0201_2008_XX REBUILD UNUSABLE LOCAL INDEXES;

    ALTER INDEX I_D0201 MODIFY DEFAULT ATTRIBUTES TABLESPACE INDX_2008_2Q;

    ALTER TABLE D0201 SPLIT PARTITION D0201_MAX1 AT (1000547)
    INTO (PARTITION D0201_2008_2Q TABLESPACE DATA_2008_2Q, PARTITION D0201_MAX2) PARALLEL 16;

    ALTER TABLE D0201 MODIFY PARTITION D0201_2008_2Q REBUILD UNUSABLE LOCAL INDEXES;

    ALTER INDEX I_D0201 MODIFY DEFAULT ATTRIBUTES TABLESPACE INDX_2008_3Q;

    ALTER TABLE D0201 SPLIT PARTITION D0201_MAX2 AT (1000639)
    INTO (PARTITION D0201_2008_3Q TABLESPACE DATA_2008_3Q, PARTITION D0201_MAX3) PARALLEL 16;

    ALTER TABLE D0201 MODIFY PARTITION D0201_2008_3Q REBUILD UNUSABLE LOCAL INDEXES;

    ...

    It splits the big partition into two new partitions: one of them is the next quarter, and the second will be split again in the next step.

    Some partitions are a few GB, so the splitting takes a long time (hours for one partition split) and a lot of disk space is also required.

    The new partitions will be smaller, but the size of the first 2008_1Q partition will be unchanged, and I'll need to reclaim the unused space somehow.

    Do you have ideas for a better/faster solution?

    Cardel wrote:

    I used DBMS_REDEFINITION once to change a non-partitioned table into a partitioned table. But now I have an existing partitioned table with a single partition, and I want a simple process to split it.

    DBMS_REDEFINITION or EXPDP/IMPDP may be faster in execution, but they need a lot of preparation time. I have approx. 60 tables, with both local and global indexes.

    With DBMS_REDEFINITION, you have no downtime.

    ORACLE-BASE - Table online redefinition improvements in Oracle Database 10g Release 1

    DBMS_REDEFINITION.sync_interim_table
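
    A rough sketch of that route (table, column and partition names are illustrative; COPY_TABLE_DEPENDENTS and the index handling are omitted for brevity):

    -- The interim table carries the desired quarterly partitioning.
    CREATE TABLE D0201_INT
    PARTITION BY RANGE (part_key)
    ( PARTITION D0201_2008_1Q VALUES LESS THAN (1000456)
    , PARTITION D0201_2008_2Q VALUES LESS THAN (1000547)
    , PARTITION D0201_MAX     VALUES LESS THAN (MAXVALUE) )
    AS SELECT * FROM D0201 WHERE 1 = 0;

    BEGIN
      DBMS_REDEFINITION.START_REDEF_TABLE(USER, 'D0201', 'D0201_INT');
      DBMS_REDEFINITION.SYNC_INTERIM_TABLE(USER, 'D0201', 'D0201_INT');  -- repeatable before the switch
      DBMS_REDEFINITION.FINISH_REDEF_TABLE(USER, 'D0201', 'D0201_INT');
    END;
    /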

    ----

    Ramin Hashimzade

  • Compression is disabled when performing updates on partitioned tables


    Hi all

    I'm on Oracle Database 11g Enterprise Edition Release 11.2.0.3.0.

    My question is related to Oracle compression.

    I have a subpartitioned table with basic compression enabled. With compression active, I update some columns in this table (with a normal UPDATE command, and with MERGE as well), but the end result shows an increase in the size of the table while compression still shows as ENABLED. After that, if I compress the subpartition explicitly, the table goes back to its original size.

    Is this a bug? I read in a whitepaper that in 11g compression is maintained for all DML operations, so why this behavior?

    Thank you

    Ishan

    Ishan,

    take a look at http://docs.oracle.com/cd/E11882_01/server.112/e25494/tables.htm#CJAGFBFG - it seems the distinction between OLTP and basic compression is sometimes a bit vague ("Operations that allow compression include: ..."), but I also find the statement "Rows inserted without using direct-path insert and updated rows are uncompressed." So I would say it's not a bug but a limitation of the feature. Updates just don't mix well with basic compression.
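
    If it helps, a rough sketch of the explicit re-compression mentioned above (segment names are illustrative):

    ALTER TABLE my_part_tab MOVE SUBPARTITION my_subpart COMPRESS;

    -- A MOVE marks the local index (sub)partitions UNUSABLE, so rebuild them:
    ALTER TABLE my_part_tab MODIFY SUBPARTITION my_subpart REBUILD UNUSABLE LOCAL INDEXES;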

    Martin

  • migration into a partitioned table

    Hello everyone

    I need to migrate two tables into a range-list partitioned table; one of them is list partitioned, the other is not. The larger one contains 400 million rows (partitioned), the smaller one contains 90 million rows. The database cannot insert a whole table in one go, so I started to migrate the tables in chunks. I have two options:

    1. Create the chunks using a date field (IDO_ID)
    2. Create the chunks using a generated field (MIGR_RANK) built from the list field (TAB_KOD) and a randomly generated number, which limits the rows to migrate to no more than 1 million per chunk

    My questions are the following:
    1. Is it possible that the insert is slow due to the large number (about 750) of list partitions? Should I use method 2, inserting into only one partition at a time? Or is a simple sort on the partitioning columns enough? How does insert work in Oracle on a partitioned table, especially a range-list partitioned one?
    2. How can I tune the database to migrate the two tables in one step (if possible)?

    Thanks in advance,
    Gabor

    Edited by: csiszgab on Sep 28, 2012 07:47

    Whenever you post, please provide your 4-digit Oracle version (the result of SELECT * FROM V$VERSION).
    >
    I need to migrate two tables into a range-list partitioned table; one of them is list partitioned, the other is not. The larger one contains 400 million rows (partitioned), the smaller one contains 90 million rows. The database cannot insert a whole table in one go, so I started to migrate the tables in chunks. I have two options:
    >
    - Oracle is certainly able to process the entire table in one go, so why do you say otherwise?

    You can speed up the insert using PARALLEL (for both the source and target tables), using direct-path load (the APPEND hint) and NOLOGGING mode on the target table.
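
    A rough illustration of that combination (the table names and degree of parallelism are assumptions):

    ALTER TABLE target_tab NOLOGGING;   -- minimal redo for direct-path loads; take a backup afterwards

    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND PARALLEL(t, 8) */ INTO target_tab t
    SELECT /*+ PARALLEL(s, 8) */ * FROM source_tab s;

    COMMIT;
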
    >
    1. Is it possible that the insert is slow due to the large number (about 750) of list partitions? Should I use method 2, inserting into only one partition at a time? Or is a simple sort on the partitioning columns enough? How does insert work in Oracle on a partitioned table, especially a range-list partitioned one?
    >
    Your "2" is not just insert in a partition. You said your target table was partitioned ' range-list' but your source is only 'list' partitioned. Therefore a partition target: data from the "one" won't a single list.

    Load the entire table at once, with no indexes (other than on the partition key) on the target table.

  • importing into an 11g interval partitioned table

    I took an export of a simple partitioned table from 8i using the exp utility - about 100k rows in there.

    and imported it with the import utility into an 11g interval partitioned table based on the date column.

    The rows were imported, but it did not do what I expected...

    If we execute a simple insert against an 11g interval partitioned table, it creates new partitions automatically according to the partitioning strategy.

    Here's the demo...

    I created a range partitioned table on the date column, with an interval clause...

    CREATE TABLE TEST.xxx_HIST
    (
    xxx_DATE DATE NOT NULL,
    P_ROLL_CONVENTION CHAR(2),
    R_ROLL_CONVENTION CHAR(2),
    P_COMPOUNDING_IND CHAR(2),
    R_COMPOUNDING_IND CHAR(2),
    P_CALC_METHOD CHAR(2),
    R_CALC_METHOD CHAR(2),
    P_SPREAD_AMT NUMBER(28,12),
    R_SPREAD_AMT NUMBER(28,12)
    )
    PARTITION BY RANGE (xxx_DATE)
    INTERVAL (NUMTOYMINTERVAL(3,'MONTH'))
    STORE IN (SECURITY)
    (
    PARTITION pQ1 VALUES LESS THAN (TO_DATE('2010-01-01','yyyy-mm-dd'))
    ) PARALLEL;


    - IMPORTED THE ROWS INTO THE TABLE...

    ======================================================================
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With partitioning, OLAP, Data Mining and Real Application Testing options

    Export file created by EXPORT:V08.01.07 via direct path

    Warning: the objects were exported by SYSTEM, not by you

    . importing XX_ARCH's objects into TEST
    . . importing table "xxx_HIST"  141749 rows imported
    Import terminated successfully without warnings.
    ========================================================================



    - IT HAS A LOT OF DIFFERENT DATES IN THERE...



    SQL> SELECT COUNT(DISTINCT xxx_DATE) FROM TEST.xxx_HIST;

    COUNT (DISTINCT xxx_DATE)
    -----------------------------
    1371


    28-MAR-06
    10-FEB-06
    09-FEB-05
    20-FEB-02
    03-JUN-02
    10-MAY-04
    26-DEC-03
    31-JAN-03

    xxx_DATE
    ---------
    21-JUL-08
    31-OCT-05
    25-APR-08
    28-APR-08
    12-OCT-06
    21-DEC-07
    28-DEC-04


    - BUT STILL ALL DUMPED INTO ONE PARTITION


    SQL> SELECT PARTITION_NAME FROM DBA_TAB_PARTITIONS WHERE TABLE_OWNER = 'TEST';

    PARTITION_NAME
    ------------------------------
    PQ1

    Everything got dumped into one partition...

    Does 11g interval partitioning create the partitions automatically, based on the rows being imported, when we import rows into the table...? Or am I missing something?

    any ideas, guys?

    Seems like a poor strategy to me because, if I am not mistaken, there is no way to specify the order of the imported rows. If you import a row with the max date as your first row... bang, you get a range partition created for you and the rest fall into it.

    I think you'd be better off importing this data into a staging table and then doing a

    insert into new_fancy_partition_table
    select *
    from old_8_temporary_imported_table
    order by date_column asc
    

    Or create the partitions manually.

    I just realized that you did specify a partition in your create table statement (missed that on cursory inspection). And I think you misunderstand how the interval works... it's for values LARGER than the existing partitions ONLY...

    http://download.Oracle.com/docs/CD/E11882_01/server.112/e10592/statements_7002.htm#SQLRF01402

    "
    INTERVAL clause

    Use this clause to set the interval of partitioning the table. Range partitions are partitions based on a digital range interval or datetime. * They extend from range partitioning by commanding the database to automatically create partitions of the specified range or interval when the data inserted in the table exceed all the partitions.* range
    "

    Edited by: Tubby on August 16, 2010 18:32

    Added a link to additional documentation.
