How to load 100 million rows into a daily-partitioned table

Hi all

I have a job in a VLDB application.

I have a table with 5 columns,
for example: A, B, C, D, DATE_TIME

I created the table range-partitioned by day on the DATE_TIME column.

I created a number of indexes, for example:
an index on A
a composite index on (DATE_TIME, B, C)

REQUIREMENT
--------------------
I need to load about 100 million records into this table every day (either via SQL*Loader or via a staging table: INSERT INTO orig SELECT * FROM temp ...).

QUESTION
---------------
The table is indexed, so I am not able to use the SQL*Loader DIRECT=TRUE feature.

So I would like to know: what is the best available way to load the data into this table?

Note: I can't drop and recreate the indexes every day because of the huge amount of data.

LiangGangYu wrote:
Exchange partition would be your best friend in this case, because all the existing or to-be-built indexes are partitioned locally.

For the daily load:
1. Create a staging table, ex_temp, with the same structure as the target table.
2. Direct-path load (sqlldr or external tables) into the staging table. You can do whatever you like here to get the best performance, with no impact on the target table 'ex'.
3. Build all the indexes on the staging table.
4. Exchange the staging table with the correct partition of the target table. This is DDL, a metadata update. Very fast.

ALTER TABLE ex
  EXCHANGE PARTITION ex_partition WITH TABLE ex_temp
  INCLUDING INDEXES
  WITHOUT VALIDATION;

Please see the documentation for more details, for example http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/partiti.htm#sthref2762
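Putting the four steps together, a minimal sketch of one daily cycle might look like this (table, index, partition, and external-table names are hypothetical):

```sql
-- 1. Staging table with the same structure as the target 'ex'.
CREATE TABLE ex_temp AS SELECT * FROM ex WHERE 1 = 0;

-- 2. Direct-path load into the staging table (here via a
--    hypothetical external table over the daily flat file).
INSERT /*+ APPEND */ INTO ex_temp
SELECT * FROM ex_daily_ext;
COMMIT;

-- 3. Build the same indexes the target's local indexes have.
CREATE INDEX ex_temp_a_ix   ON ex_temp (a);
CREATE INDEX ex_temp_dbc_ix ON ex_temp (date_time, b, c);

-- 4. Swap the loaded data into that day's partition:
--    a metadata-only operation, so it is very fast.
ALTER TABLE ex
  EXCHANGE PARTITION p20240101 WITH TABLE ex_temp
  INCLUDING INDEXES
  WITHOUT VALIDATION;
```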

Hello

Do not forget that after the exchange operation you should gather statistics on the partition,
and if you have a lot of queries spanning multiple partitions, adjust the table-level stats too.
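For example, partition-level stats can be gathered right after the exchange (schema, table, and partition names are hypothetical):

```sql
BEGIN
  -- Gather stats only on the freshly exchanged partition.
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => USER,
    tabname     => 'EX',
    partname    => 'P20240101',
    granularity => 'PARTITION');
END;
/
```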

Kind regards
Marcin Przepiorowski
http://oracleprof.blogspot.com/

Tags: Database

Similar Questions

  • Copy 35 million rows from a 186-million-row table

    OK, I have been trying to accomplish this feat for the last 4 nights. There is some I/O limitation with this storage. Basically, my boss wants a subset of a 186-million-row table copied into another table (without causing performance degradation... so we have a 10-hour window to do it in). Let me also throw in the fact that it's a Data Guard primary... so all redo logs are shipped to the standby site. What I tried was:

    (a) create table TableA_subset as select * from TableA where condition <= 10000 [returns 30 million rows] - it took more than 10 hours... had to kill it
    (b) export using Data Pump with a query condition [query = "where condition <= 10000"] for the rows I wanted to keep - it ran for more than 10 hours... had to kill it

    Given the standby site, taking the database offline is out of the question (since rebuilding the standby would take much too long)... Really, I only have 10 hours to get this done... and I could not... AND the table continues to grow... Suggestions as to how I can extract this data, truncate the existing table and load the subset back into the original table would be greatly appreciated. I also tried a direct-path load... Help, please

    JG, in my view, that idea is worth considering. This approach would reduce the I/O load on the existing production server. It should be feasible, as long as the test system is the same version and runs on compatible hardware.
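    If the CTAS is retried, a parallel direct-load variant may fit the window (names are hypothetical; NOLOGGING is deliberately avoided here, because unlogged operations would leave the Data Guard standby with unrecoverable blocks):

    ```sql
    -- Parallel, direct-load CTAS; logging is kept so the standby stays valid.
    CREATE TABLE tablea_subset
    PARALLEL 8
    AS
    SELECT /*+ PARALLEL(a, 8) */ *
    FROM   tablea a
    WHERE  condition <= 10000;
    ```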

    HTH - Mark D Powell.

  • How to load an .xlsx file into a table

    Hi all

    Hope you are all doing well.

    I have a test.xlsx file, which sits in a location on a unix server; now I want to load this file into the test table.

    I tried with SQL*Loader but could not get it to parse the file; with a positional load it does load data, but some values come through as garbage, something like ? ||| ...

    My big question is: is it possible to load an .xlsx file into a table in oracle?

    Can I use an external table? or the loader?

    Thanks to all in advance

    Hello

    Here is a solution to your question:

    https://technology.AMIS.nl/2013/01/19/read-a-Excel-xlsx-with-PLSQL/

    Kind regards

    Bashar

  • How to load multiple files into multiple tables using a Controlfile?

    Hello world

    I have four different tables with similar structures, getting data from four different data files. I would like to use one control file to load the data from the four different files into the four different tables.

    Here's the DDL of the tables:

    CREATE TABLE Product_Sales(  
        Year_of_Sale NUMBER,  
        Product_Type VARCHAR2(25 CHAR),  
        Product_Group VARCHAR2(25 CHAR),  
        Category_Type VARCHAR2(25 CHAR),  
        Category_Group VARCHAR2(10 CHAR),  
        Product_Count NUMBER,  
        Product_Amount NUMBER(19,2),  
        Category_Count NUMBER,  
        Category_Amount NUMBER(19,2)  
    )  
    
    

    CREATE TABLE Retail_Sales(  
        Year_of_Sale NUMBER,  
        Product_Type VARCHAR2(25 CHAR),  
        Product_Group VARCHAR2(25 CHAR),  
        Category_Type VARCHAR2(25 CHAR),  
        Category_Group VARCHAR2(10 CHAR),  
        Product_Count NUMBER,  
        Product_Amount NUMBER(19,2),  
        Category_Count NUMBER,  
        Category_Amount NUMBER(19,2)  
    )  
    
    

    You have Products_Sales instead of Product_Sales in your WHEN clause, so nothing is loaded into the Product_Sales table.  You have also not reset the position for the first field after the subsequent INTO TABLE and WHEN clauses, so SQL*Loader starts looking for the first field at the position where it left off in the previous section, instead of at position 1; it therefore finds nothing and loads no data into the Household_Sales table.  You need to reset POSITION(1) for each table/WHEN combination after the first one; for the first it is optional.  Please see the corrected control file below.

    LOAD DATA
    INFILE 'output.txt'
    INTO TABLE Product_Sales TRUNCATE
    WHEN filename = 'Product_Sales'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (
    filename FILLER,
    Year_of_Sale,
    Product_Type,
    Product_Group,
    Category_Type,
    Category_Group,
    Product_Count,
    Product_Amount DECIMAL EXTERNAL,
    Category_Count,
    Category_Amount DECIMAL EXTERNAL
    )
    INTO TABLE Household_Sales TRUNCATE
    WHEN filename = 'Household_Sales'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (
    filename FILLER POSITION(1),
    Year_of_Sale,
    Household_Type,
    Product_Group FILLER,
    Category_Type FILLER,
    Category_Group FILLER,
    Product_Count,
    Product_Amount DECIMAL EXTERNAL,
    Category_Count,
    Category_Amount DECIMAL EXTERNAL
    )

  • How to load the values into a table?

    In my jspx page, I have a drop-down list and a table... Based on the value selected in the drop-down list, when I click a button, I want to load values into the table... The data in the table comes from 5 database tables. I created a read-only viewObject... What do I need to do so that the values are loaded into the table when I click the button?

    Ensure that you have defined a bind variable for your view object.

    Read-only or not, that is what makes the ExecuteWithParams action available.

    John

  • How to load only the first n lines of a file into a table using sqlldr

    Hi Master,

    I have a .csv file with 500 records. I want to load only the first 100 records into a table. Is it possible to do that with SQL*Loader?

    Please advise...!

    Regards

    SA

    You can use the LOAD option.

    See the SQL*Loader command-line reference.
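    A minimal sketch of the LOAD option (user, data file, and control-file names are hypothetical):

    ```
    # Load only the first 100 logical records of the data file:
    sqlldr userid=scott/tiger control=test.ctl data=test.csv load=100
    ```

    The related SKIP option can be used to skip records at the start of the file instead.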

  • How to remove a newly added row in an advanced table

    Hello

    It comes to 12.1.3.

    I have a case of use with advanced table and the "Add Row" button.

    The user can add two (empty) rows and fill in 2 fields (both part of the PK, both LOVs), then starts filling them in and realizes he has added too many, because a TooManyObjectsException is thrown. What would be the 'nice' way to delete the last added row, or several newly added rows if more than one was added (before actually trying to commit)? I mean, how do I reference a row that has just been added and should be removed - such a row does not always have a PK (the PK may not even be settable because of the PK violation, if that makes sense). Is there a simple/easy way to do it? Would be great if someone could share a line or two...

    Thank you

    Anatoliy

    PS

    I have a delete button in a footer, with an action that fires a 'delete' event in the controller, as below, but the following code deletes the LAST record in the advanced table, not the one where the cursor is:

    if ("delete".equals(pageContext.getParameter(EVENT_PARAM)))
    {
        String rowRef = pageContext.getParameter(OAWebBeanConstants.EVENT_SOURCE_ROW_REFERENCE);
        OARow row = (OARow)am.findRowByRef(rowRef);
        row.remove();
    }

    Asmirnov-Oracle wrote:

    Hello

    Yes, I actually tried this approach too.

    Which approach have you tried? 1 or 2?

    If you tried the second approach, can you try this?

    OAViewObject srcAppvo = (OAViewObject)findViewObject("SrcAppVO1");
    if (srcAppvo != null) {
        RowSetIterator srcAppRowItr =
            srcAppvo.createRowSetIterator("srcAppRowItr");
    
        try {
            Row srcAppRow;
            srcAppRowItr.setRangeStart(0);
            srcAppRowItr.setRangeSize(srcAppvo.getFetchedRowCount());
            int recCnt = srcAppvo.getFetchedRowCount();
            while (srcAppRowItr.hasNext()) {
                srcAppRow =
                        srcAppRowItr.next();
                if (srcAppRow != null) {
                    if (srcAppRow.getAttribute("SelectFlag") != null &&
                        srcAppRow.getAttribute("SelectFlag").toString().equals("Y")) {
                        srcAppRow.remove();
                    }
                }
            }
        } finally {
            srcAppRowItr.closeRowSetIterator();
        }
    }
    
  • How to load a csv file into an external table

    I'm using oracle 11g xe edition.

    It's probably a beginner's error, but I can't see it.

    Error at command line 21, column 12

    Error report -

    SQL Error: ORA-00906: missing left parenthesis

    CREATE TABLE rgc_ext
    (
        first_name VARCHAR2(20),
        name       VARCHAR2(20)
    )
    ORGANIZATION EXTERNAL
    (
        TYPE ORACLE_LOADER
        DEFAULT DIRECTORY CER
        ACCESS PARAMETERS
        (
            RECORDS DELIMITED BY NEWLINE
            FIELDS
            (
                first_name VARCHAR2(20),
                name       VARCHAR2(20),
                DOB        DATE
            )
        )
        LOCATION 'info.csv'
    );

    Hello

    Try with the following text:

    CREATE TABLE rgc_ext
    (
        first_name VARCHAR2(20),
        name       VARCHAR2(20)
    )
    ORGANIZATION EXTERNAL
    (
        TYPE ORACLE_LOADER
        DEFAULT DIRECTORY CER
        ACCESS PARAMETERS
        (
            RECORDS DELIMITED BY NEWLINE
            FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
            (
                first_name CHAR(20),
                name       CHAR(20),
                DOB        CHAR(10)
            )
        )
        LOCATION ('info.csv')
    );

  • How to delete 60 million rows

    Hi all

    I have a table tb1 with 60 million rows and another table tb2 with 30 million rows.

    I want to delete the rows of tb2 that match the first table.

    For this I wrote a procedure, but when I try to run it, the procedure gives up after 4 minutes because of the large amount of data:

     PROCEDURE temp (id IN NUMBER)
    AS
       CURSOR c
       IS
          SELECT order
            FROM tab1
           WHERE (status = 'A'
                  AND billdt < ADD_MONTHS (SYSTIMESTAMP, -12))
                 OR (status IN ('B','C')
                     AND orderdt < ADD_MONTHS (SYSTIMESTAMP, -12));
    
       TYPE ordernum IS TABLE OF tab1.order%TYPE;
    
       order1   ordernum;
    BEGIN
    
       OPEN c;
    
       LOOP
          FETCH c
          BULK COLLECT INTO order1
          LIMIT 1000;
    
          FORALL i IN 1 .. order1.COUNT
             DELETE FROM tb2
                   WHERE id = order1 (i);
    
          EXIT WHEN c%NOTFOUND;
       END LOOP;
       COMMIT;
        EXCEPTION
          WHEN OTHERS
          THEN
             ROLLBACK;
           END ;
           
    Can someone help me?

    Thanks in advance

    When possible, you should try a set-based solution (for example, a single SQL statement).

    Give this a shot:

    DELETE FROM tb2
    WHERE  order IN ( SELECT order
                      FROM   tab1
                      WHERE  (   status = 'A'
                             AND billdt < ADD_MONTHS(SYSTIMESTAMP, -12)
                             )
                      OR     (   status IN ('B','C')
                             AND orderdt < ADD_MONTHS(SYSTIMESTAMP, -12)
                             )
                    )
    ;
    

    If it is still too slow, then we have to look at alternative methods. These threads may be an interesting read in that direction.

    {message: id = 1812597}

    {: identifier of the thread = 863295}

  • How to load the g/l account table

    Hi guys, hope you are all doing well.
    I want to upload accounts to GL. Can anyone guide me? Can you provide me with an example script?

    Kind regards
    SK

    If you have set up the account structure manually in another instance, you can download it with FNDLOAD and transfer it to the other one.
    Log in to My Oracle Support and search for FNDLOAD; then you can transfer the accounting flexfield definition.

    The same is possible for the values of the accounting flexfield segments.

    Another option is to use DataLoad (search the Internet for the load tool).
    A template for the segment values exists; with it you can upload values into Oracle via Excel -> DataLoad.

    Dirk

  • How can I improve the speed of queries against 100 million records?

    How can I improve the speed of queries against 100 million records?

    When I query with several conditions, the queries are slow, about 200 to 500 seconds.

    3070066 wrote:

    How can I improve the speed of queries against 100 million records?

    When I query with several conditions, the queries are slow, about 200 to 500 seconds.

    There are 4 ways.

    Faster hardware: faster disks, bigger pipes, more I/O bandwidth, etc. In other words, reduce the cost (in time) of the I/O against 100 million rows.

    Parallel processing: instead of a single serialized process reading 100 million rows, use 100 parallel processes, each reading 1 million rows.

    Better and faster I/O paths: plan the query so that it either does not read 100 million rows at all, or, instead of reading 100 million table rows (of, say, 1 KB each), reads 100 million index entries (of, say, 50 bytes each).

    Rethink the business requirement that demands reading/processing 100 million rows, or rethink the data model that requires 100 million rows to be processed for that requirement.

    There is no option 5, a magic wand that can be waved to make this process fast. Analyzing the performance of the query (based on the links already provided in the responses above) is the first step in identifying the real problem and how to solve it.
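    Option 2, for instance, can be requested per query with a parallel hint (table name and degree of parallelism are hypothetical, and the instance must have parallel execution resources available):

    ```sql
    -- Ask the optimizer to scan the table with 8 parallel slaves.
    SELECT /*+ FULL(t) PARALLEL(t, 8) */ COUNT(*)
    FROM   big_table t
    WHERE  status = 'A';
    ```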

  • Loading 1 million rows

    Hi all, I have a problem that I would like to solve as quickly as possible.
    I have a table with a primary key and 1 million rows. This table is not partitioned. I would like to load another 1 million rows into it.
    I'm afraid it will take a lot of time because of the primary key and the analyzing of the table. I am not sure if sqlldr would be a good choice. I use oracle 10g.

    Can anyone give some suggestions on the fastest way available in oracle 10g to load this data?
    Moreover, the 1 million rows are spread over several files. If I instead had the 1 million rows in an intermediate table, would a plain insert be quick for inserting 1 million rows into another table that already has 1 million rows?

    1 million rows is not really that much. I don't know where your concerns come from. Do you have to do some serious processing on these data, or is it just a direct load?

    SQL*Loader works fine, as do EXTERNAL TABLES... preferably using DIRECT path operations if possible.

    [Documentation | http://www.oracle.com/technology/documentation/database10gr2.html] should be able to go from there.
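    As a sketch of the direct-path route (table names are hypothetical), the APPEND hint requests a direct-path insert above the high-water mark:

    ```sql
    INSERT /*+ APPEND */ INTO target_table
    SELECT * FROM staging_table;   -- or an external table over the files
    COMMIT;  -- direct-path loaded rows are not visible until commit
    ```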

  • Tuning a SQL insert of 1 million rows that does a full table scan

    Hi Experts,

    I'm on Oracle 11.2.0.3 on Linux. I have a SQL that inserts data into a history/archive table from a main application table, based on a date. The application table has 3 million rows, and all rows more than 6 months old must go into the history/archive table. This was decided recently, and we have 1 million rows that meet this criterion. The insert into the archive table takes about 3 minutes. The explain plan shows a full table scan on the main table, which is the right thing, because we are pulling 1 million of the main table's rows into the history table.

    My question is: can I make this SQL go faster?

    Here's the query plan (I changed the table names etc.):

    INSERT INTO EMP_ARCH
    SELECT *
    FROM   EMP M
    WHERE  HIRE_DATE < (SYSDATE - :v_num_days);
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        2      0.00       0.00          0          0          0           0
    Execute      2     96.22     165.59      92266     147180    8529323     1441230
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        4     96.22     165.59      92266     147180    8529323     1441230
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 166
    Rows     Row Source Operation
    -------  ---------------------------------------------------
    1441401   TABLE ACCESS FULL EMP (cr=52900 pr=52885 pw=0 time=21189581 us)
    
    
    

    I heard that there is a way to use the opt_param hint to increase the multiblock read count, but it did not work for me... I would be grateful for suggestions on this. Could bulk collections, and rewriting this in PL/SQL, also make it faster?

    Thank you

    OrauserN

    (1) Create an index on hire_date.

    (2) Use the 'APPEND' hint in the INSERT ... SELECT statement.

    (3) Run ALTER SESSION ENABLE PARALLEL DML; before you run the whole statement.
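    Taken together, the three suggestions might look like this (index name and degree of parallelism are hypothetical):

    ```sql
    -- One-time: index on the filter column.
    CREATE INDEX emp_hire_date_ix ON emp (hire_date);

    -- Per run: direct-path, parallel insert into the archive table.
    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND PARALLEL(emp_arch, 4) */ INTO emp_arch
    SELECT /*+ PARALLEL(m, 4) */ *
    FROM   emp m
    WHERE  hire_date < (SYSDATE - :v_num_days);

    COMMIT;
    ```

    Note that with 1 million of 3 million rows qualifying, the full scan may still be the cheaper plan; once the index exists, the optimizer will choose.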

  • How to select a table row on click using JavaScript?

    I have an ADF table with some data. If the user clicks on a cell in the table, I want to do something with the whole row.

    However, the event.getSource () method only gives me the column that was clicked.

    How can I find the row using JavaScript?

    Hello

    Try the approach given by frank at https://forums.oracle.com/thread/2205398

  • After dragging a table, how can we enable single row selection?

    Hello world

    I dragged a table onto the page and created two buttons. Now I need the table rows to be selectable (while dragging, I did not check that option).
    After dragging the table, how can I enable single row selection?

    Thank you.

    Hi Kumar,

    Add this attribute to the af: table:

    rowSelection="single"
    

    AP
