compression of tables

We have large tables of around 4 GB that we intend to compress to save space. We also run nightly ETL jobs that load these tables.

My question is: after the ETL loads new data into the tables, will that data be compressed as well?

Can someone give an example of how to compress a table?
 select * from v$version;

Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
"CORE     10.2.0.5.0     Production"
TNS for HPUX: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production

According to http://www.oracle.com/technetwork/database/options/partitioning/twp-data-compression-10gr2-0505-128172.pdf

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Compressed tables or partitions can be modified just like other Oracle tables or partitions. The data can be changed using INSERT, UPDATE and DELETE statements, for example. However, data that is modified without using bulk (direct-path) insertion techniques will not be compressed. Deleting compressed data is as fast as deleting uncompressed data. Inserting new data is also as fast, because the data is not compressed in the case of conventional inserts; it is compressed only during bulk loads. Updating compressed data may be a little slower in some cases.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

In short, newly inserted data will be compressed only if it is loaded with parallel DML or with a direct-path load (APPEND hint).

Lordane Iotzov
http://iiotzov.WordPress.com/
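
For illustration, a minimal sketch of what this means for the nightly ETL, assuming basic table compression on 10gR2 (the table and index names sales_fact, sales_fact_idx and sales_stage are made up):

    -- One-time: compress the existing rows (rewrites the segment)
    ALTER TABLE sales_fact MOVE COMPRESS;
    -- The MOVE leaves the indexes UNUSABLE, so rebuild them
    ALTER INDEX sales_fact_idx REBUILD;

    -- Nightly ETL: use a direct-path insert so the new rows are stored compressed
    INSERT /*+ APPEND */ INTO sales_fact
    SELECT * FROM sales_stage;
    COMMIT;

    -- Rows loaded with a conventional INSERT (no APPEND, no parallel DML) stay uncompressed.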

Tags: Database

Similar Questions

  • How to compress a table, not a tablespace

    I just need the process for compressing my tablespace, which is 4 GB.

    Hello

    If you do not specify a partition name, you can compress all partitions of the table
    with the following statement:

    ALTER TABLE <table_name> MOVE COMPRESS FOR ALL OPERATIONS;

    If you want to compress a specific partition you can specify its name:

    ALTER TABLE <table_name>
    MOVE PARTITION <partition_name> COMPRESS FOR ALL OPERATIONS;

    Hope this helps.
    Best regards
    Jean Valentine

  • compression of the table

    Hi guys,

    I am trying to compress a table. I connect via Oracle SQL Developer > right-click > Storage > Compress. That works fine.

    But the size of the table does not change.

    Is there a command to say: 'start compressing my table1 now'?

    More than 50,000 rows in my table.

    Thank you!

    Compressing the data in a table does NOT change the size of the table segment.

    It just increases the free space available inside the table.

    Oracle never voluntarily gives back disk space once it has acquired it.

    Oracle is designed to reuse free space.

    What problem are you trying to solve?
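
    If the goal is actually to shrink the stored data, a minimal sketch (the table name TABLE1 comes from the question; the index name TABLE1_PK is a made-up placeholder):

    -- Rewrite the table with compression; this repacks the rows below a new high-water mark
    ALTER TABLE table1 MOVE COMPRESS;

    -- Indexes are left UNUSABLE by the MOVE and must be rebuilt
    ALTER INDEX table1_pk REBUILD;

    -- Check the allocated segment size before and after
    SELECT segment_name, bytes/1024/1024 AS mb
    FROM   user_segments
    WHERE  segment_name = 'TABLE1';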

  • table compress

    Hello

    I have some questions related to table compression. I am supposed to compress some tables. My database version is 11gR2 (11.2.0.1) and the operating system is Linux. I follow the steps below:

    (1) Find the size of the table: select segment_name, bytes from dba_segments where segment_name = 'EMP';
    (2) alter table ... move compress;
    (3) Find the new size of the table: select segment_name, bytes from dba_segments where segment_name = 'EMP';
    (4) Rebuild the indexes.


    Should we also compress the indexes and gather statistics on the table?


    Kindly help me on this.

    Kind regards

    Edited by: 967462 November 4, 2012 01:01


    If you have already rebuilt the indexes, it should be fine. To be sure, you can do an ALTER INDEX ... REBUILD COMPRESS and see if there is any space saving, but don't forget that this will help only if the index is a non-unique index with many duplicate values.

    HTH
    Aman...
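
    For reference, a sketch of the full sequence from the question, with a statistics refresh at the end (EMP is the table from the question; EMP_PK is a hypothetical index name):

    -- Size before
    SELECT segment_name, bytes FROM dba_segments WHERE segment_name = 'EMP';

    -- Compress the existing rows
    ALTER TABLE emp MOVE COMPRESS;

    -- Rebuild the indexes the MOVE left UNUSABLE (optionally compressed)
    ALTER INDEX emp_pk REBUILD;          -- or: REBUILD COMPRESS for a duplicate-heavy non-unique index

    -- Refresh optimizer statistics so the new block counts are visible
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EMP');

    -- Size after
    SELECT segment_name, bytes FROM dba_segments WHERE segment_name = 'EMP';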

  • Create Table As Select and Hybrid Columnar Compression

    I'm looking to copy tables using CREATE TABLE AS SELECT and have a question about Hybrid Columnar Compression. From testing, I found that the uncompressed data will be approximately 10 TB and the compressed data will be around 1 TB. I plan to compress the table as part of the CREATE TABLE AS SELECT statement and wanted to know in what order Oracle performs the compression, i.e. will the table be created first and then compressed, or is the data compressed as the table is created? The motivation behind the question is to see how much storage I need to complete the operation.

    Thank you

    If you are using
    create table xxx compress for query high as select ...

    then the data is compressed as it is inserted (not afterwards), so in your case it will use about 1 TB of disk space.
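
    A minimal sketch of such a CTAS (table names are made up; note that COMPRESS FOR QUERY HIGH, i.e. Hybrid Columnar Compression, requires Exadata or other supported Oracle storage):

    CREATE TABLE big_copy
      COMPRESS FOR QUERY HIGH
    AS
      SELECT * FROM big_source;

    -- Verify the compression setting and the resulting size
    SELECT table_name, compression, compress_for
    FROM   user_tables
    WHERE  table_name = 'BIG_COPY';

    SELECT ROUND(bytes/1024/1024/1024, 1) AS gb
    FROM   user_segments
    WHERE  segment_name = 'BIG_COPY';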

  • PL/SQL block for compressing multiple tables

    I have a user xxxxxx who owns 65 tables, and I want to compress all of these tables at once.
    Can someone suggest a PL/SQL block to compress all the tables in one go?

    To compress a single table: alter table table_name move compress;

    You can only compress one table at a time, so you can either use a loop in a PL/SQL block like so...

    DECLARE
    
           TYPE tt_TabList IS TABLE OF user_tables.table_name%TYPE;
    
           l_TabList       tt_TabList;
    
    BEGIN
    
         SELECT
               table_name
         BULK COLLECT INTO
              l_TabList
         FROM
             user_tables;
    
         FOR li_Tab IN 1..l_TabList.COUNT
         LOOP
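             -- Note: ALTER TABLE ... MOVE leaves the table's indexes UNUSABLE; rebuild them afterwards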
    
             EXECUTE IMMEDIATE 'ALTER TABLE '||l_TabList(li_Tab)||' MOVE COMPRESS';
    
         END LOOP;
    
    END;
    

    Or you could generate the DDL statements and run them in SQL*Plus:

    SELECT 'ALTER TABLE '||table_name||' MOVE COMPRESS;' FROM user_tables
    

    HTH

    David

  • Internal (zip-like) compression of table contents possible?

    I have a database with about 20 tables that contain a large amount of data.
    This database is rarely used (once for a few hours every two weeks).

    When I generate SQL scripts (with INSERTs) to recreate the data, the file size is huge (about 30 GB!).

    When I zip-compress these SQL scripts, they are reduced to about 20 MB!

    So the ratio between the uncompressed and the compressed data size is a remarkable 1500:1.

    To avoid wasting disk space, I am thinking of letting Oracle compress the data internally as well.
    The final solution should work something like this:

    1.) Start the Oracle services
    2.) Connect to the database
    3.) Tell Oracle (via a SQL*Plus command): "Hey Oracle, decompress all tables internally:
    tab1, tab2, tab3, ..., t20"
    4.) Run some SQL operations: SELECT, UPDATE, ...
    5.) Tell Oracle (via a SQL*Plus command): "Hey Oracle, now re-compress all tables internally:
    tab1, tab2, tab3, ..., t20 again"
    6.) Disconnect from the database
    7.) Stop the Oracle services

    OK, I know the compression and decompression will take a few minutes, but that is acceptable to me.

    Is this somehow possible?

    Peter

    The syntax is given in the doc: http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/statements_7003.htm#CJABIGJI
    An example on 10gR2:

    SQL> create tablespace mytbs datafile '/data/oracle/mydb/mytbs.dbf' size 10m default compress;
    
    Tablespace created.
    
    SQL>
    

    To see which edition supports which feature, take a look here: http://download.oracle.com/docs/cd/E11882_01/license.112/e10594/editions.htm#DBLIC116 (basic table compression requires Enterprise Edition).

    Nicolas.
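
    For the on-demand part of the workflow (steps 3 and 5 above), a minimal sketch using basic table compression (each MOVE also leaves the table's indexes UNUSABLE, so they would need rebuilding afterwards):

    -- Step 3: decompress the tables before the working session
    ALTER TABLE tab1 MOVE NOCOMPRESS;
    ALTER TABLE tab2 MOVE NOCOMPRESS;
    -- ... repeat for the remaining tables

    -- Step 5: re-compress them afterwards
    ALTER TABLE tab1 MOVE COMPRESS;
    ALTER TABLE tab2 MOVE COMPRESS;
    -- ... repeat for the remaining tables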

  • How to add a column to a compressed table

    Hi gurus,

    Can someone help me with how to add a column to a compressed table?


    Thanks in advance

    The only difference is when the added column has a default value. In that case:

    SQL> create table tbl(id number,val varchar2(10))
      2  /
    
    Table created.
    
    SQL> insert into tbl
      2  select level,lpad('X',10,'X')
      3  from dual
      4  connect by level <= 100000
      5  /
    
    100000 rows created.
    
    SQL> select bytes
      2  from user_segments
      3  where segment_name = 'TBL'
      4  /
    
         BYTES
    ----------
       3145728
    
    SQL> alter table tbl move compress
      2  /
    
    Table altered.
    
    SQL> select bytes
      2  from user_segments
      3  where segment_name = 'TBL'
      4  /
    
         BYTES
    ----------
       2097152
    
    SQL> alter table tbl add name varchar2(5) default 'NONE'
      2  /
    alter table tbl add name varchar2(5) default 'NONE'
                        *
    ERROR at line 1:
    ORA-39726: unsupported add/drop column operation on compressed tables
    
    SQL> alter table tbl add name varchar2(5)
      2  /
    
    Table altered.
    
    SQL> update tbl set name = 'NONE'
      2  /
    
    100000 rows updated.
    
    SQL> commit
      2  /
    
    Commit complete.
    
    SQL> select bytes
      2  from user_segments
      3  where segment_name = 'TBL'
      4  /
    
         BYTES
    ----------
       7340032
    
    SQL> select compression from user_tables where table_name = 'TBL'
      2  /
    
    COMPRESS
    --------
    ENABLED
    
    SQL> alter table tbl move compress
      2  /
    
    Table altered.
    
    SQL> select bytes
      2  from user_segments
      3  where segment_name = 'TBL'
      4  /
    
         BYTES
    ----------
       2097152
    
    SQL> 
    

    SY.

  • Compress or partition the table, or both?

    Hello
    I have an Oracle table with about 250 million records and a size of about 35 GB.
    I want to save space by compressing the table.

    I'm not sure which approach is more space-efficient: compressing the table as a whole, compressing partitions within a partitioned table, or both?

    Please advise,
    JP

    Hello

    It is a trade-off between performance and manageability.
    If you work with partitions, you give yourself a chance to work with smaller data sets (which, in certain circumstances, spares your database useless I/O).

    As for space efficiency, it depends largely on the nature of what you are compressing (normal data, BLOB/CLOB columns, the amount of 'redundancy'...) and which method you use (normal compression, advanced compression...).

    I suggest you try the various methods on a large enough sample of your table (partitioning / no partitioning, advanced compression, normal compression).

    I would also test bulk inserts/updates/deletes (with heavy volumes) and the main queries you run against this table, comparing I/O and CPU.

    I suggest you take a look at this:
    Implementation of the advanced compression

    (With your volumes, I guess you will benefit from compression in most situations...)

    Published by: user11268895 on July 22, 2010 11:11
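
    As an illustration of combining the two, a sketch of a range-partitioned table with compressed historical partitions and an uncompressed current partition (all names are made up):

    CREATE TABLE orders_hist (
      order_id   NUMBER,
      order_date DATE,
      payload    VARCHAR2(200)
    )
    PARTITION BY RANGE (order_date) (
      PARTITION p2010_q1  VALUES LESS THAN (DATE '2010-04-01') COMPRESS,
      PARTITION p2010_q2  VALUES LESS THAN (DATE '2010-07-01') COMPRESS,
      PARTITION p_current VALUES LESS THAN (MAXVALUE) NOCOMPRESS
    );

    -- Compare segment sizes per partition after loading a representative sample
    SELECT partition_name, ROUND(bytes/1024/1024) AS mb
    FROM   user_segments
    WHERE  segment_name = 'ORDERS_HIST';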

  • Size of the table with and without compression

    Hello
    I have a table that takes up 35.5 GB of space with 120 million rows in it.
    I thought that compressing it would save me a lot of space, so I did.

    Here's how I compressed it:
    I created a separate tablespace, and in the new tablespace I created several data files of 2 GB each.
    Then I inserted all the records from the original table into the new one, and
    I ran the query below to see the sizes:

    SQL> select ((blocks*8192)-(blocks*avg_space))/1024/1024 "Size MB", empty_blocks,
      2         avg_space, num_freelist_blocks
      3  from   user_tables
      4  where  table_name = 'STRATEGY';

       Size MB EMPTY_BLOCKS  AVG_SPACE NUM_FREELIST_BLOCKS
    ---------- ------------ ---------- -------------------
    2059.88281            0          0                   0

    ----------------------------------------

    SQL> select ((blocks*8192)-(blocks*avg_space))/1024/1024 "Size MB", empty_blocks,
      2         avg_space, num_freelist_blocks
      3  from   user_tables
      4  where  table_name = 'STRATEGY';

       Size MB EMPTY_BLOCKS  AVG_SPACE NUM_FREELIST_BLOCKS
    ---------- ------------ ---------- -------------------
    35504.2422            0          0                   0

    In the output above, the first result (about 2 GB) is the space used after compression, and the second is the original size with NO compression.

    My question is: are these calculated sizes reliable?
    Because when Oracle compresses, it inserts the data block by block into the data files.
    Some blocks may be only half filled and then left as they are before moving on to the next block, and so forth.

    Do the sizes above account for such empty space in the blocks?

    I don't know how to explain it better.
    Please help.
    JP

    I don't see where you use compression. So I don't know what you mean by compress. If you mean native compression, use something like:

    SQL> create table tbl as select * from dba_source;
    
    Table created.
    
    SQL> select  sum(bytes)
      2    from  user_segments
      3    where segment_name = 'TBL'
      4  /
    
                   SUM(BYTES)
    -------------------------
                    159383552
    
    SQL> alter table tbl move compress
      2  /
    
    Table altered.
    
    SQL> select  sum(bytes)
      2    from  user_segments
      3    where segment_name = 'TBL'
      4  /
    
                   SUM(BYTES)
    -------------------------
                    124780544
    
    SQL> 
    

    SY.

  • Removing a column from a compressed table

    I am trying to drop a column from table DPRUEBA, which was created with the compress option:
    SQL>select * from v$version
      2  /
    
    BANNER                                                                
    ----------------------------------------------------------------      
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi      
    PL/SQL Release 10.2.0.4.0 - Production                                
    CORE     10.2.0.4.0     Production                                            
    TNS for Linux: Version 10.2.0.4.0 - Production                        
    NLSRTL Version 10.2.0.4.0 - Production                                
    
    5 rows selected.
    
    Elapsed: 00:00:00.09
    SQL>
    SQL>CREATE TABLE DPRUEBA
      2    (COL1 NUMBER,
      3    COL2 NUMBER) COMPRESS
      4  /
    
    Table created.
    
    Elapsed: 00:00:00.06
    SQL>
    SQL>ALTER TABLE DPRUEBA DROP COLUMN COL2
      2  /
    ALTER TABLE DPRUEBA DROP COLUMN COL2
    *
    ERROR at line 1:
    ORA-39726: unsupported add/drop column operation on compressed tables
    
    
    Elapsed: 00:00:00.06
    Any idea?

    You can always do something like this

    SQL> CREATE TABLE DPRUEBA
      2   (COL1 NUMBER,
      3    COL2 NUMBER) COMPRESS
      4  / 
    
    Table created.
    
    SQL>
    SQL> ALTER TABLE DPRUEBA DROP COLUMN COL2
      2  /
    ALTER TABLE DPRUEBA DROP COLUMN COL2
                                    *
    ERROR at line 1:
    ORA-12996: cannot drop system-generated virtual column
    
    SQL>
    SQL> create table new_DPRUEBA
      2  as
      3  select col1
      4    from DPRUEBA
      5  /
    
    Table created.
    
    SQL>
    SQL> drop table DPRUEBA
      2  /
    
    Table dropped.
    
    SQL>
    SQL> rename new_DPRUEBA to DPRUEBA
      2  /
    
    Table renamed.
    
    SQL>
    SQL> desc DPRUEBA
     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------
     COL1                                               NUMBER
    
    SQL>
    SQL>
    SQL> drop table DPRUEBA
      2  /
    
    Table dropped.
    
    SQL>
    SQL> 
    

    Note: you will have to take care of constraints, triggers, and so on yourself.

  • Table compression option

    If I create the following table:
    create table t (no number, var1 varchar2(2000), var2 varchar2(2000)) compress;

    If I do not specify a number after COMPRESS, what is the default value?

    The syntax is valid, but the (redundant) number does nothing for regular heap tables. If the table is an IOT (index-organized table), however, it specifies the number of leading primary key columns that should be compressed; the default is one less than the number of primary key columns.

      1  create table iot_1 (
      2     id1     number,
      3     id2     number,
      4     id3     number,
      5     v1      varchar2(10),
      6     v2      varchar2(10),
      7     constraint iot_pk primary key (id1, id2,id3)
      8  )
      9* organization index
    SQL> /
    
    Table created.
    
    SQL> select index_name,compression, prefix_length
       2 from user_indexes;
    
    INDEX_NAME           COMPRESS PREFIX_LENGTH
    -------------------- -------- -------------
    IOT_PK               DISABLED
    
    1 row selected.
    
    SQL> alter table iot_1 move compress 1;
    
    Table altered.
    
    SQL> select index_name,compression, prefix_length
      2  from user_indexes;
    
    INDEX_NAME           COMPRESS PREFIX_LENGTH
    -------------------- -------- -------------
    IOT_PK               ENABLED              1
    
    1 row selected.
    
    SQL> alter table iot_1 move compress;
    
    Table altered.
    
    SQL> select index_name,compression, prefix_length
      2 from user_indexes;
    
    INDEX_NAME           COMPRESS PREFIX_LENGTH
    -------------------- -------- -------------
    IOT_PK               ENABLED              2
    
    1 row selected.
    

    Regards
    Jonathan Lewis
    http://jonathanlewis.WordPress.com
    http://www.jlcomp.demon.co.UK

    "The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge." Stephen Hawking.

  • Why can't I create indexes on the RDF data table?

    When I try to create indexes on the RDF data table, it always says the table or view does not exist. I created the RDF model using Java code:

    Oracle oracle = new Oracle("jdbc:oracle:thin:@localhost:1521:orcl", "system", "123");
    GraphOracleSem graph = new GraphOracleSem(oracle, "test2");

    And I used the following commands in SQL*Plus to create the indexes:

    SQL> SELECT DISTINCT OWNER, OBJECT_NAME
      2  FROM DBA_OBJECTS
      3  WHERE OBJECT_TYPE = 'TABLE'
      4  AND OBJECT_NAME LIKE '%TEST2%';

    OWNER
    --------------------------------------------------------------------------------
    OBJECT_NAME
    --------------------------------------------------------------------------------
    SYSTEM
    TEST2_NS

    SYSTEM
    RDFB_TEST2

    SYSTEM
    TEST2_TPL

    OWNER
    --------------------------------------------------------------------------------
    OBJECT_NAME
    --------------------------------------------------------------------------------
    SYSTEM
    RDFC_TEST2

    SQL> connect as sysdba
    Enter password:
    Connected.

    SQL> select * from TEST2_TPL;
    select * from TEST2_TPL
                  *
    ERROR at line 1:
    ORA-00942: table or view does not exist

    SQL> CREATE INDEX test2_sub_idx ON TEST2_TPL (triple.GET_SUBJECT());
    CREATE INDEX test2_sub_idx ON TEST2_TPL (triple.GET_SUBJECT())
    *
    ERROR at line 1:
    ORA-00942: table or view does not exist

    Hi Shifu,

    It is not recommended to use the SYS or SYSTEM schema to store/manage RDF graph data.

    Can you please try the following in a SQL*Plus terminal?

    SQL> conn system/eu1
    Connected.

    SQL> create user graphuser identified by graphuser;

    User created.

    SQL> grant connect, resource, unlimited tablespace to graphuser;

    Grant succeeded.

    SQL> conn graphuser/graphuser
    Connected.

    SQL> create table graph_tpl (triple sdo_rdf_triple_s) compress;

    Table created.

    SQL> exec sem_apis.create_sem_model('graph', 'graph_tpl', 'triple');

    PL/SQL procedure successfully completed.

    SQL> insert into graph_tpl values (sdo_rdf_triple_s('graph', '', '', ''));

    1 row created.

    SQL> select count(1) from mdsys.rdfm_graph;

    1

    Do you see the same result?

    Thank you

    Zhe Wu

  • Partitioned data compression

    We use range partitioning based on the day. After 14 days, the data is no longer updated. We retain the data for 6 months and would like to keep it online because it is queried infrequently (fewer than 100 queries per day). We do not use Exadata.

    My question:
    Which compression option should I use?
    (a) Compress the table or tablespace for all the data
    (b) Compress only the data that is older than 14 days

    We also use a physical Data Guard standby, so whatever we do must replicate to the standby.

    Thanks for your help.

    Unfortunately, with a physical standby, force logging might be an issue; like everything else, you will need to test this to see what impact you get.

    Also, if the physical standby is at a long-distance location and you have the Advanced Compression license, I would look into compressing the redo shipped to the physical standby; this can help a lot, especially with short-term bursts of redo going to the standby.

    If you keep the partitions small enough, run the MOVE ... COMPRESS outside of main operating hours, and compress the redo shipped to the standby, you may find that the periodic bursts from compressing the daily partitions are not as big an issue as you might think; test it and see.
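
    For option (b), a minimal sketch of compressing a partition once it has aged past 14 days (table and partition names are made up):

    -- Compress one aged daily partition; run this outside peak hours
    ALTER TABLE sales_hist MOVE PARTITION p_20120101 COMPRESS;

    -- Local index partitions on the moved partition become UNUSABLE; rebuild them
    ALTER TABLE sales_hist MODIFY PARTITION p_20120101 REBUILD UNUSABLE LOCAL INDEXES;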

  • Advanced compression lost with a simple update

    Hi all

    We have implemented Advanced Compression and the table size was reduced from 2.1 GB to 1.5 GB, but just after a simple UPDATE query its size increased to 3.3 GB.
    Compression was done with "COMPRESS FOR ALL OPERATIONS".
    Any ideas on that?

    Thank you

    To add to the comment: we also had a backup copy of the original table (created using DEC * of the original table). Applying the same update on the original uncompressed backup table of 2.0 GB, the size after that same update is only 2.282 GB.


    What is the cost of updates under Advanced Compression in 11g?

    Published by: user527353 on August 20, 2009 18:01

    Hi Smita,

    The chained-row count is not the condition here. This is the reason why you should not use compression (any type of table compression) for tables with a significant percentage of updates.

    HTH Mathias
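
    If a table has grown like this after heavy updates, a minimal sketch of re-compressing it (the table name T1 and index name T1_PK are made up; COMPRESS FOR ALL OPERATIONS is the 11gR1 wording, later renamed COMPRESS FOR OLTP):

    -- Segment size before
    SELECT ROUND(bytes/1024/1024/1024, 2) AS gb FROM user_segments WHERE segment_name = 'T1';

    -- Rewrite the table so the updated rows are stored compressed again
    ALTER TABLE t1 MOVE COMPRESS FOR ALL OPERATIONS;

    -- Rebuild indexes left UNUSABLE by the move
    ALTER INDEX t1_pk REBUILD;

    -- Segment size after
    SELECT ROUND(bytes/1024/1024/1024, 2) AS gb FROM user_segments WHERE segment_name = 'T1';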
