Rebuild partition/subpartition in parallel

Hello

I'm on 11g Release 2.

I have a composite partitioned table.

I have local bitmap indexes created with the PARALLEL clause; no degree was specified, so it was left at DEFAULT.

I have marked the index UNUSABLE for some of the subpartitions.

Now, I want to rebuild the local indexes on the server.

ALTER INDEX <index_name> REBUILD SUBPARTITION <subpartition_name>;

Given that these indexes were initially created with the PARALLEL clause, do I need to explicitly specify the PARALLEL clause for them to be rebuilt in parallel?

EDIT:

I've tested this, and it seems that I have to specify the PARALLEL clause explicitly for the index to be rebuilt in parallel. Without the parallel clause, the rebuild used only a single thread to do the work.

Marked as answered and closed.

Answered: we need to specify the PARALLEL clause explicitly.
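For reference, a minimal sketch of the explicit-parallel rebuild (the index name, subpartition name, and degree are placeholders, not from the original post):

```sql
-- Rebuild a single subpartition with an explicit degree of parallelism
ALTER INDEX my_bitmap_idx REBUILD SUBPARTITION my_subpart PARALLEL 8;

-- REBUILD ... PARALLEL may change the index's stored degree as a side effect,
-- so you may want to reset it afterwards
ALTER INDEX my_bitmap_idx PARALLEL;
```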

Tags: Database

Similar Questions

  • Rebuild partitioned indexes

    Hello
    I have a newbie question.
    I have an index with 36 partitions and 12996 subpartitions.
    How can I rebuild this index?
    Thank you

    Why do you want to rebuild it?

    You can generate the rebuild statements by executing something similar to the following SELECT statement - NOTE that I DID NOT WRITE this AND have NOT TESTED it

    SELECT 'ALTER INDEX ' || index_name || ' REBUILD ONLINE;'
    FROM user_indexes
    WHERE index_name NOT IN (SELECT index_name FROM user_ind_partitions)
    AND tablespace_name = 'BI_MIFID_STG_IND'
    UNION
    SELECT 'ALTER INDEX '
    || index_name
    || ' REBUILD PARTITION '
    || partition_name
    || ' ONLINE;'
    FROM user_ind_partitions
    WHERE index_name NOT IN (SELECT index_name FROM user_ind_subpartitions)
    AND tablespace_name = 'BI_MIFID_STG_IND'
    UNION
    SELECT 'ALTER INDEX '
    || index_name
    || ' REBUILD SUBPARTITION '
    || subpartition_name
    || ' ONLINE;'
    FROM user_ind_subpartitions
    WHERE subpartition_name IN (SELECT partition_name
                                FROM user_segments
                                WHERE tablespace_name = 'BI_MIFID_STG_IND')
    AND tablespace_name = 'BI_MIFID_STG_IND';

  • Rebuild partition cubes

    It will be very useful if I can have a bit of information here.

    I made a cube partitioned on a particular level of a particular dimension (say, the day level of the date dimension). While refreshing the cube using the dbms_cube.build procedure, I noticed that each and every partition of the cube is rebuilt and maintained, including partitions for which no updates/inserts took place in my fact table (on which the cube is built). Is this behavior expected? The maintenance time is unnecessarily very high (in the case of a single changed record in my fact table, while the date dimension has 2 years of retention).

    The cube is actually built on an incremental view of the fact table based on the update time. I use a complete refresh mechanism for the dimensions and a fast solve mechanism for the cube. The AWM version is 11.1.0.7A and the Oracle database is 11.1.0.7.

    Any response will be appreciated.
    Thanks in advance.
    Araujo

    The behavior you describe is expected. Specifically, if you run a "fast solve", then the server will reload each partition in the cube, whether or not it has changed data. What I am less clear about, since you don't give enough details, is whether the aggregation (aka SOLVE) fires in each partition.

    There are many tricks you can play to reduce the amount of work done by a build, but the precise approach will depend on the amount of data you load each time. If you load a relatively small amount of data each time, and especially if it corresponds to only one or two partitions, then you could try the following.

    exec dbms_cube.build('my_cube using (load serial, solve)', parallelism=>2, add_dimensions=>false)
    

    The SERIAL keyword tells the server to load all the mapped table values at once (i.e. not partition by partition). That is a big win if your table contains data for only a few partitions or if it has very few rows. The SOLVE step will be parallelized, but will fire only for partitions changed by the load step. Of course, you must set the parallelism to the desired level. If only one partition will be affected, you can choose parallelism => 0.

  • PARTITIONS AND SUBPARTITIONS

    Hi all
    Here's my problem with this table:
    CREATE TABLE CKEPM.CKCLASS_ALL
    (
      DATA_POPOLAMENTO  DATE,
      OBJ_CAT           CHAR(1 BYTE),
      NODE_ID           NUMBER,
      CA_V              VARCHAR2(256 BYTE),
      CB_V              VARCHAR2(256 BYTE),
      C1                NUMBER,
      C2                NUMBER
     )
    TABLESPACE TBS_CKEPM_ALL
    PARTITION BY RANGE (DATA_POPOLAMENTO)
    SUBPARTITION BY LIST (OBJ_CAT)
    (  
      PARTITION ALL_20100130204500RQ VALUES LESS THAN (TO_DATE(' 2010-01-30 21:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        TABLESPACE TBS_CKEPM_ALL
      ( SUBPARTITION ALL_20100130204500RQL01 VALUES ('N')    TABLESPACE TBS_CKEPM_ALL,
        SUBPARTITION ALL_20100130204500RQL02 VALUES ('L')    TABLESPACE TBS_CKEPM_ALL ),  
      PARTITION ALL_20100130210000RQ VALUES LESS THAN (TO_DATE(' 2010-01-30 21:15:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        TABLESPACE TBS_CKEPM_ALL
      ( SUBPARTITION ALL_20100130210000RQL01 VALUES ('N')    TABLESPACE TBS_CKEPM_ALL,
        SUBPARTITION ALL_20100130210000RQL02 VALUES ('L')    TABLESPACE TBS_CKEPM_ALL )
    );
    
    CREATE BITMAP INDEX CKEPM.IX_CKCLASSALL_NODE ON CKEPM.CKCLASS_ALL
    (NODE_ID)
      TABLESPACE TBS_CKEPM_ALLX
    LOCAL (  
      PARTITION ALL_20100130204500RQ
        TABLESPACE TBS_CKEPM_ALLX
      ( SUBPARTITION ALL_20100130204500RQL01    TABLESPACE TBS_CKEPM_ALLX,
        SUBPARTITION ALL_20100130204500RQL02    TABLESPACE TBS_CKEPM_ALLX ),  
      PARTITION ALL_20100130210000RQ
        TABLESPACE TBS_CKEPM_ALLX
      ( SUBPARTITION ALL_20100130210000RQL01    TABLESPACE TBS_CKEPM_ALLX,
        SUBPARTITION ALL_20100130210000RQL02    TABLESPACE TBS_CKEPM_ALLX )
    );
    I need to collect statistics for the partitions/subpartitions/indexes of ALL_20100130204500RQ, and then copy all these statistics to ALL_20100130210000RQ

    (1) could someone post the DBMS_STATS.gather_table_stats and DBMS_STATS.copy_table_stats scripts?

    I tried
    exec DBMS_STATS.gather_table_stats (ownname               => 'CKEPM',
    tabname               => 'CKCLASS_ALL',
    partname              => 'ALL_20100130204500RQ', 
    estimate_percent      => DBMS_STATS.AUTO_SAMPLE_SIZE ,
    CASCADE               => FALSE);
    
    select partition_name , last_analyzed
    from user_TAB_PARTITIONS 
    where partition_name IN ('ALL_20100130204500RQ', 'ALL_20100130210000RQ');
    
    PARTITION_NAME     LAST_ANALYZED
    ALL_20100130204500RQ     09/02/2010 8.56.20
    ALL_20100130210000RQ      
    
    exec DBMS_STATS.copy_table_stats (ownname => 'CKEPM', 
    tabname => 'CKCLASS_ALL', 
    srcpartname => 'ALL_20100130204500RQ', 
    dstpartname => 'ALL_20100130210000RQ');
    
    ORA-06533: Subscript beyond count
    ORA-06512: at "SYS.DBMS_STATS", line 16496
    ORA-06512: at line 1
    
    select partition_name , last_analyzed
    from user_TAB_PARTITIONS 
    where partition_name IN ('ALL_20100130204500RQ', 'ALL_20100130210000RQ');
    
    PARTITION_NAME     LAST_ANALYZED
    ALL_20100130204500RQ     09/02/2010 8.56.20
    ALL_20100130210000RQ     09/02/2010 8.56.20 
    (2) when I make the index unusable and rebuild it, do I also need to collect statistics (and how)?

    Thank you all,
    Riccardo

    Published by: user12581838 on February 8, 2010 23:58

    Published by: user12581838 on February 9, 2010 12:48 AM
    SQL> drop table CKCLASS_ALL;
    
    Table dropped.
    
    SQL>
    SQL> CREATE TABLE CKCLASS_ALL
      2  (
      3    DATA_POPOLAMENTO  DATE,
      4    OBJ_CAT           CHAR(1 BYTE),
      5    NODE_ID           NUMBER,
      6    CA_V              VARCHAR2(256 BYTE),
      7    CB_V              VARCHAR2(256 BYTE),
      8    C1                NUMBER,
      9    C2                NUMBER
     10   )
     11  PARTITION BY RANGE (DATA_POPOLAMENTO)
     12  SUBPARTITION BY LIST (OBJ_CAT)
     13  (
     14    PARTITION ALL_20100130204500RQ VALUES LESS THAN (TO_DATE(' 2010-01-30 21:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
     15   ( SUBPARTITION ALL_20100130204500RQL01 VALUES ('N')   ,
     16      SUBPARTITION ALL_20100130204500RQL02 VALUES ('L')  ),
     17    PARTITION ALL_20100130210000RQ VALUES LESS THAN (TO_DATE(' 2010-01-30 21:15:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
     18    ( SUBPARTITION ALL_20100130210000RQL01 VALUES ('N')  ,
     19      SUBPARTITION ALL_20100130210000RQL02 VALUES ('L')  )
     20  );
    
    Table created.
    
    SQL> insert into CKCLASS_ALL
      2  select TO_DATE(' 2010-01-30 20:50:00', 'SYYYY-MM-DD HH24:MI:SS'),'N',0,'X','Y',0,0
      3  from dual
      4  connect by level <= 150000;
    
    150000 rows created.
    
    SQL>
    SQL> insert into CKCLASS_ALL
      2  select TO_DATE(' 2010-01-30 20:50:00', 'SYYYY-MM-DD HH24:MI:SS'),'L',0,'X','Y',0,0
      3  from dual
      4  connect by level <= 150000;
    
    150000 rows created.
    
    SQL>
    SQL> insert into CKCLASS_ALL
      2  select TO_DATE(' 2010-01-30 21:10:00', 'SYYYY-MM-DD HH24:MI:SS'),'N',0,'X','Y',0,0
      3  from dual
      4  connect by level <= 150000;
    
    150000 rows created.
    
    SQL>
    SQL> insert into CKCLASS_ALL
      2  select TO_DATE(' 2010-01-30 21:10:00', 'SYYYY-MM-DD HH24:MI:SS'),'L',0,'X','Y',0,0
      3  from dual
      4  connect by level <= 150000;
    
    150000 rows created.
    
    SQL> CREATE BITMAP INDEX IX_CKCLASSALL_NODE ON CKCLASS_ALL
      2  (NODE_ID)
      3  LOCAL (
      4    PARTITION ALL_20100130204500RQ
      5    ( SUBPARTITION ALL_20100130204500RQL01  ,
      6      SUBPARTITION ALL_20100130204500RQL02  ),
      7    PARTITION ALL_20100130210000RQ
      8    ( SUBPARTITION ALL_20100130210000RQL01  ,
      9      SUBPARTITION ALL_20100130210000RQL02  )
     10  );
    
    Index created.
    
    SQL> set timing on
    
    SQL> begin
      2  DBMS_STATS.gather_table_stats (ownname  => 'MAXR',
      3  tabname               => 'CKCLASS_ALL',
      4  partname              => 'ALL_20100130204500RQL01',
      5  granularity => 'SUBPARTITION',
      6  estimate_percent      => DBMS_STATS.AUTO_SAMPLE_SIZE ,
      7  CASCADE               => FALSE);
      8  end;
      9  /
    
    PL/SQL procedure successfully completed.
    
    Elapsed: 00:00:04.57
    SQL> begin
      2  DBMS_STATS.gather_table_stats (ownname               => 'MAXR',
      3  tabname               => 'CKCLASS_ALL',
      4  partname              => 'ALL_20100130204500RQL02',
      5  granularity => 'SUBPARTITION',
      6  estimate_percent      => DBMS_STATS.AUTO_SAMPLE_SIZE ,
      7  CASCADE               => FALSE);
      8  end;
      9  /
    
    PL/SQL procedure successfully completed.
    
    Elapsed: 00:00:00.29
    SQL>
    SQL> begin
      2  DBMS_STATS.copy_table_stats (ownname => 'MAXR',
      3      tabname => 'CKCLASS_ALL',
      4      srcpartname => 'ALL_20100130204500RQ',
      5      dstpartname => 'ALL_20100130210000RQ');
      6  end;
      7  /
    
    PL/SQL procedure successfully completed.
    
    Elapsed: 00:00:00.65
    
    SQL> select PARTITION_NAME, NUM_ROWS, BLOCKS, EMPTY_BLOCKS, AVG_SPACE, CHAIN_CNT, AVG_ROW_LEN, SAMPLE_SIZE, LAST_ANALYZED
      2  from dba_tab_partitions
      3  where table_owner='MAXR';
    
    PARTITION_NAME                   NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN SAMPLE_SIZE LAST_ANAL
    ------------------------------ ---------- ---------- ------------ ---------- ---------- ----------- ----------- ---------
    ALL_20100130204500RQ               300000       1244            0          0          0          20     10-FEB-10
    ALL_20100130210000RQ
    
    SQL> select subPARTITION_NAME, NUM_ROWS, BLOCKS, EMPTY_BLOCKS, AVG_SPACE, CHAIN_CNT, AVG_ROW_LEN, SAMPLE_SIZE, LAST_ANALYZED
      2  from dba_tab_subpartitions
      3  where table_owner='MAXR';
    
    SUBPARTITION_NAME                NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN SAMPLE_SIZE LAST_ANAL
    ------------------------------ ---------- ---------- ------------ ---------- ---------- ----------- ----------- ---------
    ALL_20100130204500RQL01            150000        622            0          0          0          20   150000 10-FEB-10
    ALL_20100130204500RQL02            150000        622            0          0          0          20   150000 10-FEB-10
    ALL_20100130210000RQL01
    ALL_20100130210000RQL02
    
    -- DROP/CREATE/INSERT AGAIN
    
    SQL> begin
      2  DBMS_STATS.gather_table_stats (ownname               => 'MAXR',
      3  tabname               => 'CKCLASS_ALL',
      4  partname              => 'ALL_20100130204500RQL01',
      5  granularity => 'SUBPARTITION',
      6  estimate_percent      => DBMS_STATS.AUTO_SAMPLE_SIZE ,
      7  CASCADE               => FALSE);
      8  end;
      9  /
    
    PL/SQL procedure successfully completed.
    
    Elapsed: 00:00:00.34
    SQL>
    SQL>
    SQL> begin
      2  DBMS_STATS.gather_table_stats (ownname               => 'MAXR',
      3  tabname               => 'CKCLASS_ALL',
      4  partname              => 'ALL_20100130204500RQL02',
      5  granularity => 'SUBPARTITION',
      6  estimate_percent      => DBMS_STATS.AUTO_SAMPLE_SIZE ,
      7  CASCADE               => FALSE);
      8  end;
      9  /
    
    PL/SQL procedure successfully completed.
    
    Elapsed: 00:00:00.18
    SQL>
    SQL> begin
      2  DBMS_STATS.copy_table_stats (ownname => 'MAXR',
      3      tabname => 'CKCLASS_ALL',
      4      srcpartname => 'ALL_20100130204500RQL01',
      5      dstpartname => 'ALL_20100130210000RQL01');
      6  end;
      7  /
    
    PL/SQL procedure successfully completed.
    
    Elapsed: 00:00:00.06
    SQL>
    SQL> begin
      2  DBMS_STATS.copy_table_stats (ownname => 'MAXR',
      3      tabname => 'CKCLASS_ALL',
      4      srcpartname => 'ALL_20100130204500RQL02',
      5      dstpartname => 'ALL_20100130210000RQL02');
      6  end;
      7  /
    
    PL/SQL procedure successfully completed.
    
    Elapsed: 00:00:00.04
    SQL>
    
    SQL> select PARTITION_NAME, NUM_ROWS, BLOCKS, EMPTY_BLOCKS, AVG_SPACE, CHAIN_CNT, AVG_ROW_LEN, SAMPLE_SIZE, LAST_ANALYZED
      2  from dba_tab_partitions
      3  where table_owner='MAXR';
    
    PARTITION_NAME                   NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN SAMPLE_SIZE LAST_ANAL
    ------------------------------ ---------- ---------- ------------ ---------- ---------- ----------- ----------- ---------
    ALL_20100130210000RQ               300000       1244            0          0          0          20     10-FEB-10
    ALL_20100130204500RQ               300000       1244            0          0          0          20     10-FEB-10
    
    Elapsed: 00:00:00.03
    SQL> select subPARTITION_NAME, NUM_ROWS, BLOCKS, EMPTY_BLOCKS, AVG_SPACE, CHAIN_CNT, AVG_ROW_LEN, SAMPLE_SIZE, LAST_ANALYZED
      2  from dba_tab_subpartitions
      3  where table_owner='MAXR';
    
    SUBPARTITION_NAME                NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN SAMPLE_SIZE LAST_ANAL
    ------------------------------ ---------- ---------- ------------ ---------- ---------- ----------- ----------- ---------
    ALL_20100130204500RQL01            150000        622            0          0          0          20   150000 10-FEB-10
    ALL_20100130204500RQL02            150000        622            0          0          0          20   150000 10-FEB-10
    ALL_20100130210000RQL01            150000        622            0          0          0          20   150000 10-FEB-10
    ALL_20100130210000RQL02            150000        622            0          0          0          20   150000 10-FEB-10
    
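    For question (2), a hedged sketch of gathering index statistics after a rebuild (the names and granularity follow the examples above; note that on 10g and later a rebuild typically computes index statistics as a side effect anyway):

    ```sql
    -- Gather statistics for one rebuilt index subpartition
    BEGIN
      DBMS_STATS.gather_index_stats(
        ownname     => 'CKEPM',
        indname     => 'IX_CKCLASSALL_NODE',
        partname    => 'ALL_20100130210000RQL01',
        granularity => 'SUBPARTITION');
    END;
    /
    ```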

    HTH

    Max

  • Rebuild the range-Hash partitioned local index

    Hi all

    We have a table with 400 million records in our data warehouse environment. It is partitioned using composite range-hash partitioning. It used global indexes previously. Last month the global indexes were changed to local indexes. Since then, partition exchange and index rebuild take twice as long as they used to. From my understanding, local indexes should provide higher availability and be more usable in a warehouse environment. Please correct me if I'm wrong and give me your suggestions. Also, I'm looking for good material on partition exchange and index rebuild with locally managed range-hash partitioned indexes. Let me know if you know a good link for this...


    Thanks in advance...

    Dear krmreddy,

    Carefully read the following link and I hope this will guide you;

    http://asktom.Oracle.com/pls/asktom/f?p=100:11:0:P11_QUESTION_ID:223318100346512896

    Kind regards.

    Ogan

  • rebuild the index partition

    Hello
    I have marked an index UNUSABLE. Now I want to rebuild the index. For this, do I have to rebuild the partition and set the tablespace in two different commands, as:

    ALTER INDEX index_name REBUILD PARTITION p1;
    ALTER INDEX index_name REBUILD PARTITION p1 TABLESPACE INDX_TS;
    Now my question is: when I ran the first command and checked the index status, it showed usable. Similarly, when I ran only the 2nd command, it also rebuilt successfully. So does running only the 2nd command work fine?
    Why does it show usable after only rebuilding the partition?


    I have a script for all indexes of a user like this

    SELECT 'ALTER INDEX ' || index_name || ' REBUILD PARTITION ' || partition_name || ';' FROM USER_IND_PARTITIONS;
    SELECT 'ALTER INDEX ' || index_name || ' REBUILD PARTITION ' || partition_name || ' TABLESPACE ' || tablespace_name || ';' FROM USER_IND_PARTITIONS;

    Why did you set all the indexes to be UNUSABLE?

    (Please don't keep saying "unuse an index"; you mean "mark an index UNUSABLE"!)

    The instance/session SKIP_UNUSABLE_INDEXES setting determines the behavior of DML. In 9i, it defaults to FALSE. In 10g, it defaults to TRUE. Thus, in 10g (unless you have changed it back to FALSE), INSERT/DELETE operations can succeed even if an index is UNUSABLE. However, indexes that enforce PRIMARY KEY constraints or unique indexes (indexes created with CREATE UNIQUE INDEX or enforcing UNIQUE constraints) cannot be ignored by normal DML.

    (Direct path operations automatically maintain the index and, therefore, can leave indexes in a temporarily unusable state. They actually behave like DDL operations and lock the table/partition, preventing concurrent updates.)
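    A minimal sketch of that setting in action (table and index names are placeholders):

    ```sql
    -- Allow DML in this session to ignore unusable (non-unique) indexes
    ALTER SESSION SET skip_unusable_indexes = TRUE;

    -- Mark one index partition unusable; ordinary DML still succeeds
    ALTER INDEX my_idx MODIFY PARTITION p1 UNUSABLE;
    INSERT INTO my_table (id) VALUES (1);
    ```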

  • User_sub_partitions data dictionary not updated after you rename the partition but user_tab_partitions is

    Hello

    Using oracle 11.2.0.3

    The table is date-range partitioned, with 1 partition for each month and 4 separate subpartitions for each month,

    and I executed the following

    ALTER TABLE retailer_transaction RENAME PARTITION SYS_P64937 TO PART_201505;

    ALTER TABLE retailer_transaction MOVE SUBPARTITION SYS_SUBP64933 TABLESPACE RTRN_PART_201505;

    ALTER TABLE retailer_transaction MOVE SUBPARTITION SYS_SUBP64934 TABLESPACE RTRN_PART_201505;

    ALTER TABLE retailer_transaction MOVE SUBPARTITION SYS_SUBP64935 TABLESPACE RTRN_PART_201505;

    ALTER TABLE retailer_transaction MOVE SUBPARTITION SYS_SUBP64936 TABLESPACE RTRN_PART_201505;

    ALTER TABLE retailer_transaction MODIFY DEFAULT ATTRIBUTES FOR PARTITION PART_201505 TABLESPACE RTRN_PART_201505;

    I check user_tab_partitions and now see partition_name as PART_201505, but when I check user_tab_subpartitions

    there are still 4 subpartitions with a partition_name of SYS_P64937.

    Why is this, and how do you ensure user_tab_subpartitions corresponds to user_tab_partitions?

    Thank you

    user5716448 wrote:

    Thanks for the reply.

    Renaming the index partition as suggested works.

    However, one last problem: trying to rebuild the local bitmap index that is marked UNUSABLE for the affected partition gives ORA-14287: cannot rebuild a partition of a composite range partitioned index.

    ALTER INDEX RTRN_SAS_IDX REBUILD PARTITION PART_201505;

    How can we rebuild the local bitmap index to ensure it can be used - we would not want to drop and recreate the local bitmap index?

    Thank you

    [oracle@localhost ~] $ oerr ora 14287

    14287, 00000, "cannot REBUILD a partition of a Composite Range partitioned index"

    * Cause: The user attempted to rebuild a partition of a Composite range partitioned index

    which is illegal

    * Action: REBUILD the index partition, one subpartition at a time
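    Since the action is to rebuild one subpartition at a time, a hedged sketch that generates the statements from the dictionary (the index name is taken from the post above):

    ```sql
    -- Generate one REBUILD per unusable subpartition of the index
    SELECT 'ALTER INDEX ' || index_name
           || ' REBUILD SUBPARTITION ' || subpartition_name || ';'
    FROM   user_ind_subpartitions
    WHERE  index_name = 'RTRN_SAS_IDX'
    AND    status = 'UNUSABLE';
    ```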

  • Best way to split the large partition to few smaller

    Hello

    I have Oracle 11.2 EE on Redhat 5.9 and I need to solve the problem with partitioning.

    A few tables of one system were prepared for partitioning a few years ago. But no partitioning was actually done, and all these tables / global indexes have only one partition for all data.

    Now, we have a lot of tables and indexes with a single partition (with a MAXVALUE limit) for all data. I would like to split this large partition into smaller partitions, by quarter of the year.

    Example:

    The existing partition d0201_2008_1q must be split into D0201_2008_2Q, D0201_2008_3Q ... MYTABLE_2014_4Q on a DATE/NUMBER column

    I tried to generate a script for splitting the partitions

    ALTER INDEX I_D0201 MODIFY DEFAULT ATTRIBUTES TABLESPACE INDX_2008_1Q;

    ALTER TABLE D0201 SPLIT PARTITION D0201_2008_1Q AT (1000456)

    INTO (PARTITION D0201_2008_XX TABLESPACE DATA_2008_1Q, PARTITION D0201_MAX1) PARALLEL 16;

    ALTER TABLE D0201 MODIFY PARTITION D0201_2008_XX REBUILD UNUSABLE LOCAL INDEXES;

    ALTER INDEX I_D0201 MODIFY DEFAULT ATTRIBUTES TABLESPACE INDX_2008_2Q;

    ALTER TABLE D0201 SPLIT PARTITION D0201_MAX1 AT (1000547)

    INTO (PARTITION D0201_2008_2Q TABLESPACE DATA_2008_2Q, PARTITION D0201_MAX2) PARALLEL 16;

    ALTER TABLE D0201 MODIFY PARTITION D0201_2008_2Q REBUILD UNUSABLE LOCAL INDEXES;

    ALTER INDEX I_D0201 MODIFY DEFAULT ATTRIBUTES TABLESPACE INDX_2008_3Q;

    ALTER TABLE D0201 SPLIT PARTITION D0201_MAX2 AT (1000639)

    INTO (PARTITION D0201_2008_3Q TABLESPACE DATA_2008_3Q, PARTITION D0201_MAX3) PARALLEL 16;

    ALTER TABLE D0201 MODIFY PARTITION D0201_2008_3Q REBUILD UNUSABLE LOCAL INDEXES;

    ...

    This splits the big partition into two new partitions. One of them is the next quarter, and the second will be split again.

    Some partitions are a few GB, so splitting takes a long time (hours for one partition split) and a lot of disk space is also required.

    The new partitions will be smaller, but the original 2008_1Q partition size will be unchanged, and I'll need to reclaim the unused space somehow.

    Do you have ideas for a better/faster solution?

    Cardel wrote:

    I used DBMS_REDEFINITION once to change a non-partitioned table to a partitioned table. But now I have an existing partitioned table with a single partition, and I want a simple process to split it.

    DBMS_REDEFINITION or EXPDP/IMPDP may be faster to execute, but they need a lot of preparation time. I have approx. 60 tables with local and global indexes.

    With DBMS_REDEFINITION, you have no downtime.

    ORACLE-BASE - Table online redefinition improvements in Oracle Database 10g Release 1

    DBMS_REDEFINITION.sync_interim_table

    ----

    Ramin Hashimzade

  • ORA-14299 & partition limits per table

    Hello

    I have a related question; see below for the table definition and the error during insert.

    CREATE TABLE MyTable
    (
      RANGEPARTKEY NUMBER(20) NOT NULL,
      HASHPARTKEY  NUMBER(20) NOT NULL,
      SOMEID1      NUMBER(20) NOT NULL,
      SOMEID2      NUMBER(20) NOT NULL,
      SOMEVAL      NUMBER(32,10) NOT NULL
    )
    PARTITION BY RANGE (RANGEPARTKEY) INTERVAL (1)
    SUBPARTITION BY HASH (HASHPARTKEY) SUBPARTITIONS 16
    (PARTITION myINITPart NOCOMPRESS VALUES LESS THAN (1));

    INSERT INTO myTable
    VALUES (65535, 1, 1, 1, 123.123);

    ORA-14299: total number of partitions/subpartitions exceeds the maximum limit

    I am aware of the restriction that Oracle has on a table (max 1024K-1, including partitions and subpartitions), which prevents me from creating a record with the key value 65535.

    Now I am stuck, as I have more than this number (65535) of IDs; the question becomes how to manage storing data for IDs beyond 65534.

    One of the alternatives I thought of is to retire/drop old partitions and modify the first partition myINITPart to cover the range of several old partitions (which are actually retired anyway) - that way I could have more room available for storing IDs.

    Therefore, PARTITION myINITPart VALUES LESS THAN (1) would be replaced by PARTITION myINITPart VALUES LESS THAN (1000), and Oracle would let me store data for an additional 1000 IDs. My concern is that Oracle will not let me change the attributes of the original partition.

    Do you see any alternatives here? Bottom line: I want to store data for IDs higher than 65535 without restriction.

    Thank you very much

    Dhaval

    Gents,

    I want to share the alternative that I found.

    Here's what I did.

    (1) Merge the first partition into the next adjacent partition. This way I end up one partition under the limit, which buys room for one more partition (this is what I wanted). In my case, the first couple of partitions are empty anyway, so nothing is lost by merging - and it is faster in my case.

    (2) Any global index we have will be invalidated and will need to be rebuilt; I'm fine, as I have none.

    (3) Local indexes are not invalidated.

    So I was able to increase the headroom just by merging the first partition into the next one - a good workaround.
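    The merge in step (1) can be sketched as follows (table and partition names are placeholders, not from the post):

    ```sql
    -- Merge the first two range partitions into one,
    -- freeing a slot under the partition-count limit
    ALTER TABLE mytable
      MERGE PARTITIONS myinitpart, p_next
      INTO PARTITION p_next;
    ```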

    Thank you all on this thread.

  • Range-Hash versus Hash Partitioning purely for report performance

    Hello

    We are evaluating different partitioning strategies and considering hash and range-hash partitioning.

    While range would give us additional housekeeping options, the top priority by far is for the reports to run as fast as possible.

    Oracle 11.2.0.3; a central fact warehouse using a large table estimated at 500 GB, a surrogate-key star schema, with the largest dimension hash-partitioned on its surrogate key.

    In your experience, on the purely query-performance side, has anyone found pure hash partitioning significantly better than partitioning by date range and subpartitioning by hash?

    Queries do not use partition pruning.

    The greatest benefit we hope to gain from partitioning is parallel query plus partition-wise joins between the hash-partitioned columns (on the fact table and the large dimension table).

    Thank you

    >
    We are evaluating different partitioning strategies and considering hash and range-hash partitioning.

    While range would give us additional housekeeping options, the top priority by far is for the reports to run as fast as possible.

    Oracle 11.2.0.3; a central fact warehouse using a large table estimated at 500 GB, a surrogate-key star schema, with the largest dimension hash-partitioned on its surrogate key.

    In your experience, on the purely query-performance side, has anyone found pure hash partitioning significantly better than partitioning by date range and subpartitioning by hash?

    Queries do not use partition pruning.

    The greatest benefit we hope to gain from partitioning is parallel query plus partition-wise joins between the hash-partitioned columns (on the fact table and the large dimension table).
    >
    The goal statements in this thread have some of the same problems and missing information as your other thread.
    Re: Compress all the existing table data

    So I would say the same thing that I suggested there, with minor changes.

    You give us your preferred solution instead of giving us information about the PROBLEM you're trying to solve.

    You must first focus on the problem:

    1. define the problem - indicate the desired objectives
    2. identify the options and resources available to address the problem
    3. select a small number of these options for evaluation and testing
    4. test the possible solutions
    5. Select and implement what you consider the "best" solution
    6. monitor the results
    >
    We evaluate different strategies and whereas hash and range-hash partitioning.
    >
    Why? What is the problem that you are trying to address and what are your desired goals? Partitioning is a solution - what's the problem?
    >
    While range would give us additional housekeeping options, the top priority by far is for the reports to run as fast as possible.
    >
    Great! Do you really need or even want housekeeping options? If so, which ones? Do you do bulk loads? Drop data periodically? How often? Monthly? Yearly?

    What is the relationship, in your analysis, between partitioning and your reports running "as fast as possible"? Give us some details. Why do you think partitioning in general (range or range-hash in particular) will somehow make your reports run faster? What kind of reports? How much data do they access to produce? How much data do they actually return? How often do the reports run? How much of a problem are the reports now? Do they generally meet their SLA? Or do they RARELY meet their SLA?

    Partitioning is not a performance remedy for badly written queries. Often the most effective way to improve report performance is to resolve any issues the queries themselves may have, or to add appropriate indexes. Have you exhausted these possibilities? Have you created and examined the execution plans for your key reports? What does that analysis show?
    >
    In your experience, on the purely query-performance side, has anyone found pure hash partitioning significantly better than partitioning by date range and subpartitioning by hash?
    >
    For a partitioned table, all data is stored in individual segments; one segment for each partition.

    For a subpartitioned table, all data is stored in individual segments; one segment for each subpartition. There is NO data stored at the partition level.

    The type of partitioning (hash versus range-hash) and the type of segment (partition versus subpartition) logically has no relevance in terms of performance.

    Query performance is directly proportional to the number of segments that have to be accessed, the type of access (via index or full scan), and the size of the segments (i.e. the amount of data).

    The number of segments that have to be accessed depends on Oracle's ability to prune partitions, statically during parsing or dynamically at run time.
    >
    Queries do not use partition pruning.
    >
    Then partitioning generally won't be valuable for performance, but only for maintenance operations.
    >
    The greatest benefit we hope to gain from partitioning is parallel query plus partition-wise joins between the hash-partitioned columns (on the fact table and the large dimension table).
    >
    Please explain why you think that partitioning will provide this benefit.

    Oracle can PARALLEL query non-partitioned tables very well without using partition-wise joins. The latest version of Oracle also has a DBMS_PARALLEL_EXECUTE package that provides additional features for performing PARALLEL operations for many more of the use cases.

    Partitioning lends itself to a natural method of CHUNKing based on the partitions/subpartitions, but that is not necessary to get the benefit of PARALLEL execution. The exception would be if partitioning provided segments that are on different spindles or reduce disk I/O contention.

    Another missing piece of key information is the number and type of indexes that your reports need. Will you be able to use mainly LOCAL partitioned indexes? Global indexes tend to destroy any maintenance benefit that can be gained from partitioning.
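For context, a minimal sketch of a LOCAL index on a composite-partitioned table, which is the layout the original question describes (all object names here are hypothetical):

```sql
-- Range-hash composite partitioned fact table.
CREATE TABLE sales_fact (
    sale_date DATE,
    cust_id   NUMBER,
    amount    NUMBER
)
PARTITION BY RANGE (sale_date)
SUBPARTITION BY HASH (cust_id) SUBPARTITIONS 4 (
    PARTITION p2024q1 VALUES LESS THAN (DATE '2024-04-01'),
    PARTITION p2024q2 VALUES LESS THAN (DATE '2024-07-01')
);

-- LOCAL bitmap index: one index subpartition per table subpartition, so
-- maintenance (marking UNUSABLE, rebuilding one subpartition) stays local.
CREATE BITMAP INDEX sales_fact_cust_bix
    ON sales_fact (cust_id) LOCAL PARALLEL;

-- Rebuilding a single subpartition; as noted in the thread above, PARALLEL
-- must be specified explicitly here or the rebuild runs serially.
ALTER INDEX sales_fact_cust_bix
    REBUILD SUBPARTITION sys_subp1 PARALLEL 4;
```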

  • Idle workers in IMPDP with parallel option

    Hello gurus,

    I took a Data Pump export of a 29GB partitioned table with the following parameters and it was extremely quick (< 10 minutes):

    parallel=4
    filesize=500MB
    directory=PUMP_DIR
    estimate=statistics
    dumpfile=exp%U.dmp
    reuse_dumpfiles=y
    logfile=export.log
    tables=user1.t1

    The export produced 4 parallel workers that were active all the time, hence the high speed.

    However, when I tried a Data Pump import on the same database into an empty table (different schema), the performance was very poor (55 minutes):

    parallel=4
    directory=PUMP_DIR
    dumpfile=exp%U.dmp
    logfile=import.log
    tables=user1.t1
    remap_schema=user1:user2
    table_exists_action=append

    I noticed that the parallel workers were idle all the time (regardless of the degree of parallelism I used) and the whole import was serialized.

    Can someone give me an idea why the parallel workers were idle during the IMPDP?


    [r00tb0x | http://www.r00tb0x.com]

    From what you describe, I'll assume that you are doing a data-only import, or at the very least, your tables already exist and you are importing partitioned tables. If that's true, then you have hit a known situation. Here's what happens:

    This is true for partitioned and subpartitioned tables, and only if the Data Pump job that loads the data did not also create the table. The last part of the previous sentence is what makes this true. If you run a Data Pump job that simply creates the tables, and then run another import job that loads the data, even when they come from the same dumpfile, you will hit this situation. The situation is:

    When the Data Pump job that loads data into a partitioned or subpartitioned table did not create the table, Data Pump cannot be sure that
    the partitioning key is the same. For this reason, when loading the data, Data Pump takes out a table lock, and this blocks all the other workers loading into that table in parallel. If the Data Pump job created the table, then only a partition or subpartition lock is taken, and other workers are free to take locks on different partitions/subpartitions of the same table.

    I suppose what you are seeing is that all the workers are trying to load data into the same table, but different partitions, and one worker holds the table lock. This blocks the other workers.

    There is a 'fix' for this, but it is a minimal fix. Data Pump still cannot load into several partitions at once; the fix only removes the table lock, so instead just 1 partition/subpartition is locked at a time. You will not see the maximum parallelism used in these cases, but you will no longer see the other workers waiting for an exclusive lock on the same table.

    I do not remember the patch number and I do not remember what version it went into, but Oracle Support should be able to help with that.

    If the same Data Pump job creates the tables, or if the tables are not partitioned/subpartitioned, then I have not heard of this issue.

    Thank you

    Dean
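A quick way to check whether the import workers are stuck behind a table lock is to join the Data Pump sessions to V$SESSION (a sketch using standard dictionary views; run as a privileged user):

```sql
-- Each Data Pump worker appears as a session; 'enq: TM - contention' waits
-- indicate a worker blocked on a table-level lock held by another worker.
SELECT s.sid, s.serial#, s.program, s.event, s.wait_class
  FROM v$session s
  JOIN dba_datapump_sessions d
    ON d.saddr = s.saddr;
```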

  • Correctly move index partitions to a different tablespace

    Oracle Version: 10.2.0.3.0

    Hello

    I have a subpartitioned index -> partitioned by range and subpartitioned by list. I have to move the index to a different tablespace. I moved all the subpartitions successfully, but the index views still show me references to the old tablespace.

    I moved each of the subpartitions as follows: ALTER INDEX MYINDEX REBUILD SUBPARTITION MYSUBPARTITION TABLESPACE NEWTABLESPACE;
    Now when I look at the corresponding index segments in the 'dba_segments' view, they have correctly moved to the new tablespace.

    But,

    In the column 'def_tablespace_name' of the view 'dba_part_indexes', the value is still the old tablespace name.
    In the column 'tablespace_name' of the view 'dba_ind_partitions', the value is still the old tablespace name.

    In the column 'tablespace_name' of the view 'dba_ind_subpartitions', the value is correct, referring to the new tablespace.

    What should I do to have these values set correctly (views dba_part_indexes and dba_ind_partitions)?

    I tried
    ALTER INDEX myindex REBUILD PARTITION mypartition TABLESPACE newtablespace;
    but I got the error:

    Error: ORA-14287
    Text: cannot REBUILD a partition of a Composite Range partitioned index
    Cause: The user attempted to rebuild a partition of a Composite Range
    partitioned index, which is illegal
    Action: REBUILD the index partition, one subpartition at a time

    Thank you to




    Stéphane

    Published by: user9928511 on March 13, 2009 11:29

    user9928511 wrote:
    Oracle Version: 10.2.0.3.0

    Hello

    I have a subpartitioned index -> partitioned by range and subpartitioned by list. I have to move the index to a different tablespace. I moved all the subpartitions successfully, but the index views still show me references to the old tablespace.

    I moved each of the subpartitions as follows: ALTER INDEX MYINDEX REBUILD SUBPARTITION MYSUBPARTITION TABLESPACE NEWTABLESPACE;
    Now when I look at the corresponding index segments in the 'dba_segments' view, they have correctly moved to the new tablespace.

    But,

    In the column 'def_tablespace_name' of the view 'dba_part_indexes', the value is still the old tablespace name.
    In the column 'tablespace_name' of the view 'dba_ind_partitions', the value is still the old tablespace name.

    The subpartitions are the "segments" in your partitioned table. The partitions are just a logical grouping of the physical subpartitions, so of course you cannot rebuild them, and they will still show the old tablespace name.

    def_tablespace_name can be changed with ALTER INDEX:
    ALTER INDEX index_name MODIFY DEFAULT ATTRIBUTES TABLESPACE tablespace_name;
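Putting the whole reply together (index, partition, and tablespace names are hypothetical):

```sql
-- 1. Rebuild each subpartition into the new tablespace (moves the segments).
ALTER INDEX myindex REBUILD SUBPARTITION mysubpartition
    TABLESPACE newtablespace;

-- 2. Fix the index-level default, reported in dba_part_indexes.
ALTER INDEX myindex MODIFY DEFAULT ATTRIBUTES TABLESPACE newtablespace;

-- 3. Fix the partition-level default, reported in dba_ind_partitions.
ALTER INDEX myindex MODIFY DEFAULT ATTRIBUTES FOR PARTITION mypartition
    TABLESPACE newtablespace;
```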

  • ALTER INDEX REBUILD and large waste area

    Hello world.

    This concerns RDBMS EE 10.2.0.2 on a box with 16 CPUs. Non-default initialization parameters:

    db_16k_cache_size = 3G
    pga_aggregate_target = 3G
    SGA_MAX_SIZE = 12G
    SGA_TARGET = 5G
    workarea_size_policy = AUTO

    I have a large table partitioned on a monthly basis with a couple of local bitmap indexes on it. Table and indexes are stored in different tablespaces. The index tablespace is

    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
    BLOCKSIZE 16K
    SEGMENT SPACE MANAGEMENT AUTO

    The nightly batch processing marks a few index partitions unusable, then inserts/appends a portion of the data and rebuilds the indexes with

    ALTER INDEX... REBUILD PARTITION... NOLOGGING PARALLEL

    When it finishes, a query on DBA_IND_PARTITIONS shows that, for all of the indexes, the partition EXTENTS value is much greater than the used BYTES value; for example, one of the partitions has 106 EXTENTS (1 MB each, so it occupies 106 MB of space) while BYTES is only 15 MB.

    I understand that during the parallel rebuild the slave processes create segments in the destination index tablespace, and so they allocate much more space than the segment finally needs. But it also means that the space is not released (deallocation/shrinking does not help). The same thing can be demonstrated by queries on DBA_SEGMENTS and DBA_FREE_SPACE. Because of this behavior, I have a huge waste of space in the index tablespace.

    Can someone help, please?
    Przemek

    Space allocation by parallel slave processes is documented in the book 'Oracle Database 10g Release 2 Data Warehousing Guide', chapter 25, "Using Parallel Execution".

    user2038804 wrote:
    Concerns the RDBMS EE 10.2.0.2 on a box with 16CPUs. Non-standard initialization parameters:

    When it finishes, a query on DBA_IND_PARTITIONS shows that, for all of the indexes, the partition EXTENTS value is much greater than the used BYTES value; for example, one of the partitions has 106 EXTENTS (1 MB each, so it occupies 106 MB of space) while BYTES is only 15 MB.

    Przemek,

    I can confirm that there is a bug in 10.2.0.2 leading to inconsistent size-related information in DBA_SEGMENTS / DBA_EXTENTS after a parallel index rebuild of a large index, possibly bug 4771672, fixed in 10.2.0.3. If I remember correctly, the EXTENT information is correct and the information reported in DBA_SEGMENTS is misleading.

    The Metalink note suggests using the DBMS_SPACE_ADMIN procedure TABLESPACE_FIX_SEGMENT_EXTBLKS to correct the erroneous information in the dictionary, but I don't know if that is the one we used when we encountered the problem.
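A hedged sketch of that suggestion (the tablespace name is hypothetical; run as a DBA, and test it on a non-production system first):

```sql
-- Recompute the dictionary's extent/block accounting for every segment
-- in the affected tablespace.
BEGIN
    DBMS_SPACE_ADMIN.TABLESPACE_FIX_SEGMENT_EXTBLKS('TS_INDEX');
END;
/
```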

    Kind regards
    Randolf

    Oracle related blog stuff:
    http://Oracle-Randolf.blogspot.com/

    SQLTools ++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.NET/projects/SQLT-pp/

  • Move the partition of the table and get ORA-14006: invalid partition name

    I'm using Oracle 11.2.0.4 and I am trying to move a partitioned table from one tablespace to another.  I checked many times and I have the correct table name and partition name.  However, I get the error ORA-14006.

    Can anyone see what could be the problem?

    SQL> ALTER TABLE GWPROD.QRY_TES_ROLLINGCUREDITS MOVE PARTITION 201112 TABLESPACE GW_PROD_T2 PARALLEL (DEGREE 4) NOLOGGING;

    ALTER TABLE GWPROD.QRY_TES_ROLLINGCUREDITS MOVE PARTITION 201112 TABLESPACE GW_PROD_T2 PARALLEL (DEGREE 4) NOLOGGING

    *

    ERROR at line 1:

    ORA-14006: invalid partition name

    Thanks in advance.

    Names that begin with numbers are not legal partition names. Someone created them by placing them between double quotes. You will need to do the same.
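Applied to the statement from the question, the workaround looks like this (only the double quotes around the partition name are new):

```sql
ALTER TABLE GWPROD.QRY_TES_ROLLINGCUREDITS
    MOVE PARTITION "201112"
    TABLESPACE GW_PROD_T2
    PARALLEL (DEGREE 4) NOLOGGING;
```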

  • ALTER INDEX with parallel and Nologging

    I want to move some indexes to a different tablespace. For this I use the SQL below:

    ALTER INDEX XYZ REBUILD TABLESPACE TS_INDX01 NOLOGGING PARALLEL 8;

    I use NOLOGGING and PARALLEL thinking it will speed up the move. But now I'm confused, because I think this will change the index properties, as in, when we create an index we specify certain parallel and logging properties.

    So I want to understand:

    1. Will the SQL above change the properties of the index?

    2. How can I check the existing NOLOGGING and PARALLEL properties of an index?

    SQL> create table temp (no integer, name varchar2(100));
    
    Table created.
    
    SQL> create index temp_idx on temp(no);
    
    Index created.
    
    SQL> select degree, logging from user_indexes where index_name = 'TEMP_IDX';
    
    DEGREE                                   LOG
    ---------------------------------------- ---
    1                                        YES
    
    SQL> drop index temp_idx;
    
    Index dropped.
    
    SQL> create index temp_idx on temp(no) nologging parallel 5;
    
    Index created.
    
    SQL> select degree, logging from user_indexes where index_name = 'TEMP_IDX';
    
    DEGREE                                   LOG
    ---------------------------------------- ---
    5                                        NO
    
    SQL> alter index temp_idx parallel 1 logging
      2  ;
    
    Index altered.
    
    SQL> select degree, logging from user_indexes where index_name = 'TEMP_IDX';
    
    DEGREE                                   LOG
    ---------------------------------------- ---
    1                                        YES
    
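So, for the rebuild in question, one option is to let the rebuild run fast and then reset the persistent attributes afterwards (a sketch; the index and tablespace names come from the question):

```sql
-- Fast move: NOLOGGING and PARALLEL apply to the rebuild itself...
ALTER INDEX xyz REBUILD TABLESPACE ts_indx01 NOLOGGING PARALLEL 8;

-- ...but they also become the index's stored attributes, so reset them
-- if the previous behavior is wanted for normal DML and queries.
ALTER INDEX xyz LOGGING;
ALTER INDEX xyz NOPARALLEL;
```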
