Loading 1 million rows

Hi all, I have a problem I would like to solve as quickly as possible.
I have a table with a primary key that already holds 1 million rows. The table is not partitioned. I would like to load another 1 million rows into this table.
I'm afraid it will take a lot of time because of the primary key maintenance and the analyzing of the table. I am not sure if sqlldr would be a good choice. I use Oracle 10g.

Can anyone give some suggestions on the fastest way available in Oracle 10g to load this data?
Moreover, the 1 million rows are spread across several files. I also have 1 million rows in an intermediate table; would a plain INSERT be quick enough to insert 1 million rows into another table that already has 1 million rows?

1 million rows is not really that much, so I don't know where your concerns arise from. Do you have to do some serious processing on this data, or is it just a straight load?

SQL*Loader works fine, as do EXTERNAL TABLES... preferably using DIRECT path operations if possible.

The [Documentation | http://www.oracle.com/technology/documentation/database10gr2.html] should be able to take you from there.
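For the multi-file case, an external table over all the files combined with a direct-path insert is a common pattern. A minimal sketch, assuming a directory object, file names, and column layout that are placeholders (not from the thread):

```sql
-- Hypothetical names: load_dir, ext_load, target_tab, data1.csv/data2.csv
CREATE TABLE ext_load (
  id    NUMBER,
  name  VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY load_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('data1.csv', 'data2.csv')   -- several source files can be listed here
);

-- Direct-path insert into the existing table; the primary key index is still maintained
INSERT /*+ APPEND */ INTO target_tab
SELECT * FROM ext_load;
COMMIT;
```

The APPEND hint writes above the high-water mark and bypasses conventional buffer-cache inserts, which is usually the biggest win for a bulk load of this size.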

Tags: Database

Similar Questions

  • Copying 35 million rows from a 186-million-row table

    OK, I have been trying to accomplish this feat for the last 4 nights. There is some I/O limitation on this storage. Basically, my boss wants a subset of a 186-million-row table copied into another table (without causing performance degradation... so we have a 10-hour window to do it in). Let me also throw in the fact that it's a primary in a Data Guard configuration... so all redo logs are shipped to the standby site. What I tried was:

    (a) create table TableA_subset as select * from TableA where conditon <= 10000 [returns 30 million rows] - it took more than 10 hours... had to kill it
    (b) export using Data Pump with a query condition [query = "where conditon <= 10000"] for the rows I wanted to keep - it ran for more than 10 hours... had to kill it

    Given the standby site, taking the database offline is impossible (since rebuilding the standby would take much too long)... Really, I have only 10 hours to get this done... and I could not... AND the table continues to grow... Suggestions as to how I can extract this data, truncate the existing table and load the subset back into the original table would be greatly appreciated. I tried a direct path load as well... Help, please

    JG, in my view, that idea is worth considering. This approach would reduce the I/O load on the existing production server. It should be feasible, as long as the test system is the same version and running on compatible hardware.

    HTH - Mark D Powell.
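    If the copy is attempted again, a parallel direct-path CTAS is usually the fastest option. A sketch using the poster's own names (the parallel degree is an arbitrary assumption, and NOLOGGING only takes effect if the database is not in FORCE LOGGING mode, which a Data Guard primary typically is):

    ```sql
    CREATE TABLE TableA_subset
    PARALLEL 8 NOLOGGING
    AS
    SELECT /*+ PARALLEL(a, 8) */ *
    FROM   TableA a
    WHERE  conditon <= 10000;
    ```

    With FORCE LOGGING in place the redo still has to be shipped to the standby, so the realistic levers left are parallelism and scheduling the copy in the quietest part of the 10-hour window.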

  • Tuning a SQL insert of 1 million rows that does a full table scan

    Hi Experts,

    I'm on Oracle 11.2.0.3 on Linux. I have a SQL statement that inserts data into a history/archive table from a main application table, based on a date. The application table has 3 million rows, and all rows that are more than 6 months old must go into the history/archive table. This was decided recently, and we have 1 million rows that meet this criterion. The insert into the archive table takes about 3 minutes. The explain plan shows a full table scan on the main table - which is the right thing, because we are pulling 1 million of the rows from the main table into the history table.

    My question is that, is it possible that I can do this sql go faster?

    Here's the query plan (I have changed the table names etc.)

       INSERT INTO EMP_ARCH
       SELECT *
    FROM EMP M
    where HIRE_date < (sysdate - :v_num_days);
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        2      0.00       0.00          0          0          0           0
    Execute      2     96.22     165.59      92266     147180    8529323     1441230
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        4     96.22     165.59      92266     147180    8529323     1441230
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 166
    Rows     Row Source Operation
    -------  ---------------------------------------------------
    1441401   TABLE ACCESS FULL EMP (cr=52900 pr=52885 pw=0 time=21189581 us)
    
    
    

    I heard that there is a way to use the opt_param hint to increase the multiblock read count, but it did not work for me... I would be grateful for suggestions on this. Could bulk collections and rewriting this in PL/SQL also make it faster?

    Thank you

    OrauserN

    (1) create an index on hire_date

    (2) use the 'APPEND' hint on the insert

    (3) run ALTER SESSION ENABLE PARALLEL DML; before you run the statement
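    Suggestions (2) and (3) together amount to a direct-path, parallel insert. A sketch with the table names from the post (degree 4 is an arbitrary assumption):

    ```sql
    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND PARALLEL(A, 4) */ INTO EMP_ARCH A
    SELECT /*+ PARALLEL(M, 4) */ *
    FROM   EMP M
    WHERE  HIRE_date < (SYSDATE - :v_num_days);

    -- a direct-path insert must be committed before the session can query the table again
    COMMIT;
    ```

    Note that suggestion (1) may not help here: an index on hire_date is only attractive when the predicate selects a small fraction of the table, and pulling a third of the rows is usually faster via the full scan the optimizer already chose.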

  • SQL tuning for 50 million rows - GROUP BY

    For example, I have a table with 50+ million rows with columns Customer_ID, Order_ID, and trans_date. I need a query to return the customers with more than 2 orders in the last 60 days.



    [sql]

    SELECT   customer_id
    FROM     table
    WHERE    trans_date BETWEEN SYSDATE - 60 AND SYSDATE
    GROUP BY customer_id
    HAVING   COUNT(customer_id) >= 2

    [sql]


    I'm trying to understand if there is a faster alternative to the GROUP BY, especially for large data sets. Thanks in advance for your help/suggestions.

    Hello

    This looks like the best way to do it.

    Is what you posted exactly what you ran?  Apparently you have changed the FROM clause; I bet your table is not really named TABLE.  Did you make other changes that you didn't think were significant?

    Is there an index on trans_date?  How many of the 50 million rows have trans_dates in this 60-day range?

    See the FAQ forum:

    https://forums.Oracle.com/message/9362003

    If you want clients with more than 2 orders, then change the HAVING clause to

    HAVING COUNT(*) > 2

    ">= 2" includes clients with exactly 2 orders.
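= 2" includes">
    If most of the 50 million rows fall outside the 60-day window, a composite index lets the query be answered from the index alone, without touching the table. A sketch (the table and index names below are placeholders):

    ```sql
    CREATE INDEX trans_cust_ix ON orders_tab (trans_date, customer_id);

    SELECT   customer_id
    FROM     orders_tab
    WHERE    trans_date BETWEEN SYSDATE - 60 AND SYSDATE
    GROUP BY customer_id
    HAVING   COUNT(*) > 2;
    ```

    Putting trans_date first lets the range predicate do an index range scan; customer_id in the index avoids the table access entirely.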

  • Update statement against 1.4 million rows

    Hello

    I'm trying to run an UPDATE statement against a table with more than 1 million rows in it.

    NAME                            Null?     Type
    ------------------------------- --------- -----
    PR_ID                           NOT NULL  NUMBER (12,0)
    PR_PROP_CODE                    NOT NULL  VARCHAR2 (180)
    VALUE                                     CLOB (4000)
    SHRT_DESC                                 VARCHAR2 (250)
    VAL_CHAR                                  VARCHAR2 (500)
    VAL_NUM                                   NUMBER (12,0)
    VAL_CLOB                                  CLOB (4000)
    UNIQUE_ID                                 NUMBER (12,0)

    The update I'm trying to do takes the VALUE column and, based on some parameters, updates one of three columns. When
    I run the SQL, it just sits there without erroring. I gave it 24 hours before killing the process.

    UPDATE pr.pr_prop_val pv
    SET pv.val_char = (
        SELECT a.value_char FROM
        (
        SELECT
        ppv.unique_id,
        CASE ppv.pr_prop_code
        WHEN 'BLMBRG_COUNTRY' THEN to_char (ppv.value)
        WHEN 'BLMBRG_INDUSTRY' THEN to_char (ppv.value)
        WHEN 'BLMBRG_TICKER' THEN to_char (ppv.value)
        WHEN 'BLMBRG_TITLE' THEN to_char (ppv.value)
        WHEN 'BLMBRG_UID' THEN to_char (ppv.value)
        WHEN 'BUSINESSWIRE_TITLE' THEN to_char (ppv.value)
        WHEN 'DJ_EUROASIA_TITLE' THEN to_char (ppv.value)
        WHEN 'DJ_US_TITLE' THEN to_char (ppv.value)
        WHEN 'FITCH_MRKT_SCTR' THEN to_char (ppv.value)
        WHEN 'ORIGINAL_TITLE' THEN to_char (ppv.value)
        WHEN 'RD_CNTRY' THEN to_char (ppv.value)
        WHEN 'RD_MRKT_SCTR' THEN to_char (ppv.value)
        WHEN 'REPORT_EXCEP_FLAG' THEN to_char (ppv.value)
        WHEN 'REPORT_LANGUAGE' THEN to_char (ppv.value)
        WHEN 'REUTERS_RIC' THEN to_char (ppv.value)
        WHEN 'REUTERS_TITLE' THEN to_char (ppv.value)
        WHEN 'REUTERS_TOPIC' THEN to_char (ppv.value)
        WHEN 'REUTERS_USN' THEN to_char (ppv.value)
        WHEN 'RSRCHDIRECT_TITLE' THEN to_char (ppv.value)
        WHEN 'SUMMIT_FAX_BODY_FONT_SIZE' THEN to_char (ppv.value)
        WHEN 'SUMMIT_FAX_TITLE' THEN to_char (ppv.value)
        WHEN 'SUMMIT_FAX_TITLE_FONT_SIZE' THEN to_char (ppv.value)
        WHEN 'SUMMIT_TOPIC' THEN to_char (ppv.value)
        WHEN 'SUMNET_EMAIL_TITLE' THEN to_char (ppv.value)
        WHEN 'XPEDITE_EMAIL_TITLE' THEN to_char (ppv.value)
        WHEN 'XPEDITE_FAX_BODY_FONT_SIZE' THEN to_char (ppv.value)
        WHEN 'XPEDITE_FAX_TITLE' THEN to_char (ppv.value)
        WHEN 'XPEDITE_FAX_TITLE_FONT_SIZE' THEN to_char (ppv.value)
        WHEN 'XPEDITE_TOPIC' THEN to_char (ppv.value)
        END value_char
        FROM pr.pr_prop_val ppv
        WHERE ppv.pr_prop_code not in
        ('BLMBRG_BODY', 'ORIGINAL_BODY', 'REUTERS_BODY', 'SUMMIT_FAX_BODY',
        'XPEDITE_EMAIL_BODY', 'XPEDITE_FAX_BODY', 'PR_DISCLOSURE_STATEMENT', 'PR_DISCLAIMER')
        ) a
        WHERE
        a.unique_id = pv.unique_id
        AND a.value_char is not null
    )
    /


    Thanks for any help you can provide.

    Graham

    What about this:

    UPDATE pr.pr_prop_val pv
    SET    pv.val_char = TO_CHAR(pv.value)
    WHERE  pv.pr_prop_code IN ('BLMBRG_COUNTRY', 'BLMBRG_INDUSTRY', 'BLMBRG_TICKER', 'BLMBRG_TITLE', 'BLMBRG_UID', 'BUSINESSWIRE_TITLE',
                               'DJ_EUROASIA_TITLE', 'DJ_US_TITLE', 'FITCH_MRKT_SCTR', 'ORIGINAL_TITLE', 'RD_CNTRY', 'RD_MRKT_SCTR',
                               'REPORT_EXCEP_FLAG', 'REPORT_LANGUAGE', 'REUTERS_RIC', 'REUTERS_TITLE', 'REUTERS_TOPIC', 'REUTERS_USN',
                               'RSRCHDIRECT_TITLE', 'SUMMIT_FAX_BODY_FONT_SIZE', 'SUMMIT_FAX_TITLE', 'SUMMIT_FAX_TITLE_FONT_SIZE',
                               'SUMMIT_TOPIC', 'SUMNET_EMAIL_TITLE', 'XPEDITE_EMAIL_TITLE', 'XPEDITE_FAX_BODY_FONT_SIZE', 'XPEDITE_FAX_TITLE',
                               'XPEDITE_FAX_TITLE_FONT_SIZE', 'XPEDITE_TOPIC')
    AND    pv.value IS NOT NULL
    
  • Generating 20 million rows causes error: ORA-01653: unable to extend table

    The exact error is: ORA-01653: unable to extend table HR.F by 1024 in the SYSTEM tablespace.
    Why can't it extend table HR.F?
    Is it because I have Oracle Express Edition 10g?
    "Usually you receive one of the following messages during the upgrade if your SYSTEM tablespace size is insufficient." What should I consider when I want to generate 20 million rows? Generating 1 million rows succeeds.

    Your data files are not autoextensible, or there is no space left on the disk where they reside.

    Run the following:

    select file_name, bytes, maxbytes, maxbytes-bytes free_bytes, autoextensible
    from dba_data_files
    where tablespace_name='SYSTEM';
    

    If AUTOEXTENSIBLE is NO then you can set it to YES by executing:

    ALTER DATABASE DATAFILE '<datafile name>' AUTOEXTEND ON;

    Otherwise, there is no more space on the device...
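    Two further options, sketched with placeholder file paths: add a datafile to SYSTEM, or (better in the long run) move the table out of SYSTEM into a user tablespace, since SYSTEM should only hold the data dictionary:

    ```sql
    -- give SYSTEM more room (the path is a placeholder)
    ALTER TABLESPACE system ADD DATAFILE '/u01/oradata/XE/system02.dbf'
      SIZE 1G AUTOEXTEND ON NEXT 100M;

    -- better: keep user data out of SYSTEM entirely
    ALTER TABLE hr.f MOVE TABLESPACE users;
    ```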

    Max
    [My Italian blog Oracle | http://oracleitalia.wordpress.com/2010/01/17/supporto-di-xml-schema-in-oracle-xmldb/]

  • How to load 100 million rows into a partitioned table

    Hi all

    I have a job in a VLDB application.

    I have a Table with 5 columns
    For ex - A, B, C, D, Date_Time

    I CREATED THE TABLE RANGE PARTITIONED (DAILY) ON COLUMN (DATE_TIME).

    CREATED A NUMBER OF INDEXES, FOR EX:
    INDEX ON A
    COMPOSITE ON DATE_TIME, B, C

    REQUIREMENT
    --------------------
    NEED TO LOAD ABOUT 100 MILLION RECORDS INTO THIS TABLE EVERY DAY (IT WILL BE LOADED VIA SQL LOADER OR A TEMPORARY TABLE (INSERT INTO ORIG SELECT * FROM TEMP)...)

    QUESTION
    ---------------
    THE TABLE IS INDEXED, SO I AM NOT ABLE TO USE THE SQLLDR FEATURE DIRECT=TRUE.

    SO I WOULD LIKE TO KNOW: WHAT IS THE BEST WAY AVAILABLE TO LOAD THE DATA INTO THIS TABLE?

    Note --> DON'T FORGET that I CAN'T DROP AND RECREATE INDEXES EVERY DAY because of the HUGE AMOUNT of DATA.

    LiangGangYu wrote:
    Partition exchange would be your best friend in this case, because all the existing or to-be-built indexes are partitioned locally.

    Daily load:
    1. Create a temporary table, ex_temp, with the same structure as the target table.
    2. Direct-path load - sqlldr or external tables - into the temporary table. You can do all the fancy stuff here to get the best performance without impacting the target table, 'ex'.
    3. Build all indexes on the temporary table.
    4. Exchange the correct partition of the target table with the temporary table. This is a DDL, metadata-only update. Very fast.

    ALTER TABLE EX
    EXCHANGE PARTITION EX_partition WITH TABLE EX_temp
    INCLUDING INDEXES
    WITHOUT VALIDATION;

    Please see the documentation for more details. for example http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/partiti.htm#sthref2762

    Hello

    Do not forget that after the exchange operation you should gather stats on the partition,
    and if you have a lot of multi-partition queries, refresh the table-level stats too.

    Kind regards
    Marcin Przepiorowski
    http://oracleprof.blogspot.com/
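    The post-exchange stats collection can be sketched like this (the owner, table, and partition names below are placeholders):

    ```sql
    BEGIN
      -- stats for the partition that was just exchanged in
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname     => 'SCOTT',
        tabname     => 'EX',
        partname    => 'EX_PARTITION',
        granularity => 'PARTITION');

      -- refresh global (table-level) stats too if many queries span partitions
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname     => 'SCOTT',
        tabname     => 'EX',
        granularity => 'GLOBAL');
    END;
    /
    ```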

  • How do I see the rows not loaded by an interface due to errors

    Hello, I have a simple interface; I'm loading data within the same Oracle database from one schema to another schema.
    But I am seeing 3 fewer rows.
    Can somebody please advise how to see the rows that were not loaded, why they were rejected, etc.

    Thank you

    Hello

    You can also see it in ODI: open the model --> right-click on your target datastore --> Control --> Errors...
    It shows you the E$_ table for that specific datastore. For each rejected row, you can see all of the columns plus the reason for the rejection.

    It will be useful,

    Jerome

  • A black bar with a command line loads automatically at the bottom - how to remove it

    Some time ago I noticed that a black bar began to appear at the bottom of the Firefox window. There is a >> mark on the left, then a command line prompt which lets me type, and a wrench icon and an 'x' on the right. When I hover the mouse over the wrench, it says "toggle developer tools" (or something similar in meaning - I am translating from Polish). I spent the better part of an hour seeking solutions on the web, but I'm at my wit's end now - I don't even know what this bar is called! Whatever it is, I would be eternally grateful for any advice on how to remove this terrible thing from my browser (the more permanent the solution, the better). Thanks in advance

    Try going into Tools >> Web Developer >> and uncheck all the options


  • How to delete 60 million rows

    Hi all

    I have table tb1 with 60 million rows and I have another table tb2 with 30 million.

    I want to delete from tb2 the rows that match the first table.

    For this I wrote a procedure to delete them, but when I try to run it, after 4 minutes the procedure fails due to the large amount of data:

     PROCEDURE temp (id IN NUMBER)
    AS
       CURSOR c
       IS
          SELECT order
            FROM tab1
           WHERE (status = 'A'
                  AND billdt < ADD_MONTHS (SYSTIMESTAMP, -12))
                 OR (status IN ('B','C')
                     AND orderdt < ADD_MONTHS (SYSTIMESTAMP, -12));
    
       TYPE ordernum IS TABLE OF tab1.order%TYPE;
    
       order1   ordernum;
    BEGIN
    
       OPEN c;
    
       LOOP
          FETCH c
          BULK COLLECT INTO order1
          LIMIT 1000;
    
          FORALL i IN 1 .. order1.COUNT
             DELETE FROM tb2
                   WHERE id = order1 (i);
    
          EXIT WHEN c%NOTFOUND;
       END LOOP;
       COMMIT;
        EXCEPTION
          WHEN OTHERS
          THEN
             ROLLBACK;
           END ;
           
    can someone help me!

    Thanks in advance

    When possible, you should try a set-based solution (i.e., a single SQL statement).

    Give this a shot:

    DELETE FROM tb2
    WHERE  order IN ( SELECT order
                      FROM   tab1
                      WHERE  (   status = 'A'
                             AND billdt < ADD_MONTHS(SYSTIMESTAMP, -12)
                             )
                      OR     (   status IN ('B','C')
                             AND orderdt < ADD_MONTHS(SYSTIMESTAMP, -12)
                             )
                    )
    ;
    

    If it is still too slow, then we have to look at alternative methods. These threads may be interesting reading along those lines.

    {message:id=1812597}

    {thread:id=863295}
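    One alternative method when deleting half a table: instead of deleting 30 million rows, keep the survivors in a new table and swap it in. A sketch mirroring the original predicate (the new table name is a placeholder, and any constraints, indexes, and grants on tb2 must be recreated; note also that ORDER is a reserved word in Oracle, so the real column is presumably named something else or quoted):

    ```sql
    CREATE TABLE tb2_keep AS
    SELECT *
    FROM   tb2
    WHERE  order NOT IN ( SELECT order
                          FROM   tab1
                          WHERE  (status = 'A'       AND billdt  < ADD_MONTHS(SYSTIMESTAMP, -12))
                          OR     (status IN ('B','C') AND orderdt < ADD_MONTHS(SYSTIMESTAMP, -12)) );

    DROP TABLE tb2;
    RENAME tb2_keep TO tb2;
    ```

    A CTAS generates far less undo and redo than a 30-million-row DELETE, which is usually where the time goes.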

  • Gray vertical lines on clips when loading project

    Hello

    I have a project shot on a Sony EX3. When I load the project, some of my timelines have gray bars slashed across the clips and will not play. The footage is on an internal hard drive. The problem affects only some timelines in the project.

    Any help is appreciated.

    Thank you

    Scott

    The gray hash marks mean that your footage has been disconnected. This seems to happen occasionally with file-based assets like those from the EX3 (it happens from time to time with my P2 MXFs). I think something gets tweaked during the media indexing process, and as you've found, a simple restart of the program usually brings them back online. I blame gremlins.

  • Add a new column to a 25-million-row table

    Hi gurus,

    I have to add a new column to a populated application table that gets hit on every transaction.
    Can we do this with no interruption of service, and what are the things that I need to look after for performance after adding it?
    Something that came to my mind is re-gathering stats.

    Any Suggestion

    Database version = 10.2.0.4
    OS = RHEL4

    I appreciate your help on this
    Thank you

    789816 wrote:
    If the table gets locked, does it mean that the application cannot write any new transactions into the table until the new default values have been updated? My understanding is that we need downtime to do this.

    *-> Yes, you need downtime in 10g.*

    Another question: if the default value is NULL, will the table still get locked?

    *-> YES.*

    In 11g this problem has been fixed by Oracle: http://www.oracle-class.com/?p=1943

    create table sales as
       select rownum as id,
              mod(rownum,5) as product_id,
              mod(rownum,100) as client_id,
              2000 as price,
              to_date('01.'||lpad(to_char(mod(rownum,3)+1),2,'0')||'.2011','dd.mm.yyyy') as c_date
       from dual
       connect by level <= 2.5e5;
    
    Table created.
    
    SQL> select count(*) from sales;
    
      COUNT(*)
    ----------
        250000
    
    ----session 1
    SQL> begin
      2  for i in 1..100000 loop
      3  insert into sales (id) values(i);
      4  end loop;
      5  commit;
      6  end;
      7  /
    
    PL/SQL procedure successfully completed.
    
    SQL> 
    
    While the inserts into the sales table were running, the ALTER TABLE adding the new column waited for them to finish
    
    -----session2:
    SQL> alter table sales add (provider_id NUMBER(2));
    
    Table altered.
    
    SQL> 
    
  • SQL*Loader - lines discarded with "rejected - all null columns"

    Hello

    Please see the attached log file. Also attached are the table creation script, the data file, and the bad and discard files after execution.

    The sqlldr client is the Windows version -

    SQL*Loader: Release 11.2.0.1.0 - Production

    The CTL file has two INTO TABLE clauses due to the nature of the data. The data presented here is a subset of the real-world data file. We are only interested in the lines with the word "Index" in the first column.

    The problem we are facing is that, whichever INTO TABLE clause appears first in the CTL file, the lines matching its WHEN clause get inserted and the rest get discarded.

    1. Create table statement: create table dummy_load (name varchar2(30), rate number, effdate date);

    2. The data file to simulate this issue contains the 10 lines below. Save this as name.dat. The intention is to load all of the rows with one CTL file. The actual file would have additional lines before and after these that can be discarded.

    H15T1Y Index|2|19/01/2016|
    H15T2Y Index|2|19/01/2016|
    H15T3Y Index|2|19/01/2016|
    H15T5Y Index|2|19/01/2016|
    H15T7Y Index|2|19/01/2016|
    H15T10Y Index|2|19/01/2016|
    CPDR9AAC Index|2|15/01/2016|
    MOODCAVG Index|2|15/01/2016|
    H15TXXX Index|2|15/01/2016|
    H15TXXX Index|2|15/01/2016|

    3. The CTL file - name.ctl

    LOAD DATA
    APPEND
    INTO TABLE dummy_load
    WHEN (09:13) = 'Index'
    TRAILING NULLCOLS
    (
    name TERMINATED BY '|',
    rate TERMINATED BY '|',
    effdate TERMINATED BY '|' "TO_DATE(:effdate, 'MM/DD/YYYY')"
    )
    INTO TABLE dummy_load
    WHEN (08:12) = 'Index'
    TRAILING NULLCOLS
    (
    name TERMINATED BY '|',
    rate TERMINATED BY '|',
    effdate TERMINATED BY '|' "TO_DATE(:effdate, 'MM/DD/YYYY')"
    )

    Invoke SQL*Loader from a batch file ->

    C:\Oracle\product\11.2.0\client\bin\sqlldr USERID = myid/[email protected] CONTROL=C:\temp\t\name.ctl BAD=C:\temp\t\name_bad.dat LOG=C:\temp\t\name_log.dat DISCARD=C:\temp\t\name_disc.dat DATA=C:\temp\t\name.dat

    Once this is run, the following appears in the log file (excerpt):

    Table DUMMY_LOAD, loaded when 09:13 = 0X496e646578 (character 'Index')

    Insert option in effect for this table: APPEND

    TRAILING NULLCOLS option in effect

    Column Name                    Position   Len  Term Encl Datatype
    ------------------------------ ---------- ----- ---- ---- ---------------------
    NAME                                FIRST     *   |       CHARACTER
    RATE                                 NEXT     *   |       CHARACTER
    EFFDATE                              NEXT     *   |       CHARACTER
        SQL string for column : "TO_DATE(:effdate, 'MM/DD/YYYY')"

    Table DUMMY_LOAD, loaded when 08:12 = 0X496e646578 (character 'Index')

    Insert option in effect for this table: APPEND

    TRAILING NULLCOLS option in effect

    Column Name                    Position   Len  Term Encl Datatype
    ------------------------------ ---------- ----- ---- ---- ---------------------
    NAME                                 NEXT     *   |       CHARACTER
    RATE                                 NEXT     *   |       CHARACTER
    EFFDATE                              NEXT     *   |       CHARACTER
        SQL string for column : "TO_DATE(:effdate, 'MM/DD/YYYY')"

    Record 1: Discarded - all null columns.
    Record 2: Discarded - all null columns.
    Record 3: Discarded - all null columns.
    Record 4: Discarded - all null columns.
    Record 5: Discarded - all null columns.
    Record 7: Discarded - failed all WHEN clauses.
    Record 8: Discarded - failed all WHEN clauses.
    Record 9: Discarded - failed all WHEN clauses.
    Record 10: Discarded - failed all WHEN clauses.

    Table DUMMY_LOAD:
      1 Row successfully loaded.
      0 Rows not loaded due to data errors.
      9 Rows not loaded because all WHEN clauses were failed.
      0 Rows not loaded because all fields were null.

    Table DUMMY_LOAD:
      0 Rows successfully loaded.
      0 Rows not loaded due to data errors.
      5 Rows not loaded because all WHEN clauses were failed.
      5 Rows not loaded because all fields were null.


    The bad file is empty. The discard file has the following

    H15T1Y Index|2|19/01/2016|
    H15T2Y Index|2|19/01/2016|
    H15T3Y Index|2|19/01/2016|
    H15T5Y Index|2|19/01/2016|
    H15T7Y Index|2|19/01/2016|
    CPDR9AAC Index|2|15/01/2016|
    MOODCAVG Index|2|15/01/2016|
    H15TXXX Index|2|15/01/2016|
    H15TXXX Index|2|15/01/2016|


    Based on my understanding of the instructions in the CTL file, ideally the first 6 rows would have been inserted into the table. Instead the table only gets the 6th line:

    NAME           RATE  EFFDATE
    H15T10Y Index  2     January 19, 2016



    If the order of the INTO TABLE clauses is reversed in the CTL file, then the first 5 rows are inserted and the rest end up in the discard file, with the 6th line getting a "rejected - all columns null" in the log file.


    Could someone please take a look and advise? My apologies that the files cannot be attached.

    Unless you tell it otherwise, SQL*Loader assumes that each INTO TABLE clause after the first starts at the position where the previous one left off.  If you want to start at the beginning of the line every time, then you need to reset the position using POSITION(1) with the first column, as shown below.  Using POSITION(1) in the first INTO TABLE clause is optional.

    LOAD DATA
    APPEND
    INTO TABLE dummy_load
    WHEN (09:13) = 'Index'
    TRAILING NULLCOLS
    (
    name POSITION(1) TERMINATED BY '|',
    rate TERMINATED BY '|',
    effdate TERMINATED BY '|' "TO_DATE(:effdate, 'MM/DD/YYYY')"
    )
    INTO TABLE dummy_load
    WHEN (08:12) = 'Index'
    TRAILING NULLCOLS
    (
    name POSITION(1) TERMINATED BY '|',
    rate TERMINATED BY '|',
    effdate TERMINATED BY '|' "TO_DATE(:effdate, 'MM/DD/YYYY')"
    )

  • Load table data into Oracle Essbase

    Hi all

    I have a problem

    An Oracle table has 50 million records, and Essbase loads data from it.

    ODI error message:

    Caused by: java.sql.BatchUpdateException: Java heap space
    at org.hsqldb.jdbc.JDBCPreparedStatement.executeBatch(Unknown Source)
    at oracle.odi.runtime.agent.execution.sql.BatchSQLCommand.execute(BatchSQLCommand.java:44)
    at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:102)
    at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:1)
    at oracle.odi.runtime.agent.execution.DataMovementTaskExecutionHandler.handleTask(DataMovementTaskExecutionHandler.java:87)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2913)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2625)
    at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:577)
    at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:468)
    at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:2128)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:366)
    at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:216)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:300)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:292)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:855)
    at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:126)
    at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)
    at java.lang.Thread.run(Thread.java:662)

    I think the agent is loading so much data that it cannot fit in memory.

    How to fix?

    Please give me a solution.

    Thank you

    As Craig said, move the staging area off the SUNOPSIS MEMORY ENGINE first. It should only be considered when you are moving/transforming small amounts of data (which 50 million rows is not :-)). Why don't you set the staging area on your source Oracle DB? That way you remove an unnecessary data movement (i.e. the LKM) and don't rely on the in-memory engine.

  • ORA-04030: out of process memory when trying to insert 12 million rows

    Hi all,
    I use an 11.2.0.2 RAC database and use the function below to insert some 12 million rows:
    CREATE OR REPLACE FUNCTION DEPLOY_CTL.f_stg_refined_play_stop_tv (no_days IN NUMBER,
                                                                      col_date DATE)
    RETURN NUMBER
    AS
       RTN_SUCCESS   CONSTANT NUMBER (1) := 0;
       RTN_FAILURE   CONSTANT NUMBER (1) := 1;
       RTN_NON_FATAL CONSTANT NUMBER (1) := 2;

       CURSOR stg_refined_play_stop_tv_cur IS
          SELECT * FROM dwh_stg.stg_refined_play_stop_tv
          WHERE trunc (EVENT_DT_TM) = col_date
          ORDER BY batch_id;

       TYPE stgrefinedplaystoptvarr IS TABLE OF stg_refined_play_stop_tv_cur%ROWTYPE INDEX BY BINARY_INTEGER;

       stgrefinedplaystoptv_rec stgrefinedplaystoptvarr;

       l_batch_id      NUMBER := 0;
       v_ctr           NUMBER := 1;
       v_ctr1          NUMBER := 1;
       l_temp_batch_id NUMBER := 0;
       l_date          DATE;
       l_batch_ctr     NUMBER := 0;
       l_date_no_days  DATE;

    BEGIN

       -- EXECUTE IMMEDIATE 'truncate table t_stg_refined_play_stop_tv';
       -- EXECUTE IMMEDIATE 'alter session set NLS_DATE_FORMAT = ''dd/mm/yyyy''';

       l_date_no_days := to_date (col_date, 'dd/mm/yyyy');

       OPEN stg_refined_play_stop_tv_cur;
       LOOP
          -- FETCH stg_refined_play_stop_tv_cur BULK COLLECT INTO stgrefinedplaystoptv_rec;
          -- EXIT
          FETCH stg_refined_play_stop_tv_cur BULK COLLECT INTO stgrefinedplaystoptv_rec;
          -- EXIT WHEN stgrefinedplaystoptv_rec.COUNT = 0;
          EXIT WHEN stg_refined_play_stop_tv_cur%NOTFOUND;
       END LOOP;

       CLOSE stg_refined_play_stop_tv_cur;

       FOR j IN 1 .. no_days
       LOOP

          l_batch_ctr := l_batch_ctr + 1;
          l_date_no_days := to_date (l_date_no_days, 'dd/mm/yyyy') - 1;

          FOR i IN stgrefinedplaystoptv_rec.FIRST .. stgrefinedplaystoptv_rec.LAST
          LOOP

             l_date := to_date (trunc (l_date_no_days) || ' ' ||
                                to_char (stgrefinedplaystoptv_rec (i).EVENT_DT_TM, 'hh24:mi:ss'),
                                'dd/mm/yyyy hh24:mi:ss');

             IF l_temp_batch_id != stgrefinedplaystoptv_rec (i).batch_id AND l_batch_ctr = 1
             THEN
                l_batch_id := stgrefinedplaystoptv_rec (i).batch_id - v_ctr;
                l_temp_batch_id := stgrefinedplaystoptv_rec (i).batch_id;
                v_ctr := v_ctr + 2;

             ELSIF l_temp_batch_id != stgrefinedplaystoptv_rec (i).batch_id AND l_batch_ctr > 1
             THEN
                l_batch_id := l_batch_id - v_ctr1;
                l_temp_batch_id := stgrefinedplaystoptv_rec (i).batch_id;

             ELSE
                l_temp_batch_id := stgrefinedplaystoptv_rec (i).batch_id;

             END IF;

             stgrefinedplaystoptv_rec (i).EVENT_DT_TM := l_date;
             stgrefinedplaystoptv_rec (i).batch_id := l_batch_id;
             stgrefinedplaystoptv_rec (i).DWH_CREATE_DT := SYSDATE;
             -- stgrefinedplaystoptv_rec (i).REFINED_STG_ID := STG_VERSION_ALL_SEQ.nextval;
             SELECT t_stg_refined_play_stop_tv.nextval INTO stgrefinedplaystoptv_rec (i).REFINED_STG_ID FROM dual;

          END LOOP;

          FORALL a IN 1 .. stgrefinedplaystoptv_rec.COUNT
             SAVE EXCEPTIONS
             INSERT INTO DWH_STG.STG_REFINED_PLAY_STOP_TV VALUES stgrefinedplaystoptv_rec (a);
          COMMIT;

       END LOOP;

       RETURN RTN_SUCCESS;

    EXCEPTION
       WHEN OTHERS THEN
          DBMS_OUTPUT.PUT_LINE ('SQLCODE ' || SQLCODE || ' ' || SQLERRM);
          RETURN RTN_FAILURE;

    END f_stg_refined_play_stop_tv;
    /


    SQL> declare
      2     l_return number;
      3  begin
      4     l_return := f_stg_refined_play_stop_tv (1, to_date ('2011-14-06', 'dd/mm/yyyy'));
      5  end;
      6  /
    SQLCODE -4030 ORA-04030: out of process memory when trying to allocate 16328 bytes (koh-kghu call, pmuccst: adt/record)

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:58.37
    SQL>



    SQL> select component, current_size, min_size, max_size from v$memory_dynamic_components;

    COMPONENT                    CURRENT_SIZE       MIN_SIZE       MAX_SIZE
    ------------------------- --------------- -------------- --------------
    shared pool                    1275068416      872415232     1275068416
    large pool                       67108864       67108864       67108864
    java pool                        67108864       67108864       67108864
    streams pool                            0              0              0
    SGA Target                    14763950080    14763950080    14763950080
    DEFAULT buffer cache          13220446208    13220446208    13220446208
    KEEP buffer cache                       0              0              0
    RECYCLE buffer cache                    0              0              0
    DEFAULT 2K buffer cache                 0              0              0
    DEFAULT 4K buffer cache                 0              0              0
    DEFAULT 8K buffer cache                 0              0              0
    DEFAULT 16K buffer cache                0              0              0
    DEFAULT 32K buffer cache                0              0              0
    Shared IO Pool                          0              0              0
    PGA Target                     9932111872     9932111872     9932111872
    ASM Buffer Cache                        0              0              0

    16 rows selected.

    Elapsed: 00:00:00.25

    Elapsed time: 00:00:00.25
    SQL> show parameter memory

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    hi_shared_memory_address             integer     0
    memory_max_target                    big integer 25G
    memory_target                        big integer 23G
    shared_memory_address                integer     0


    I'm using the 11g automatic memory management feature. Please can someone tell me how to solve this problem?

    Thank you very much

    The answers to this question are the same as the last time it was asked:
    Re: ORA-04030: out of process memory when inserting 12 million rows on 11g

    You should not, regardless of manual or automatic memory settings, load 12 million rows into a collection in one fell swoop.
    Especially not when you can probably rewrite it as a single INSERT ... SELECT statement.
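    If the row-by-row transformation really cannot be expressed in SQL, a bounded-memory version of such a loop fetches with a LIMIT clause, so only a fixed number of rows sit in the collection at any time. A sketch (the history table name is a placeholder):

    ```sql
    DECLARE
       CURSOR c IS SELECT * FROM dwh_stg.stg_refined_play_stop_tv;
       TYPE t_arr IS TABLE OF c%ROWTYPE INDEX BY PLS_INTEGER;
       l_rows t_arr;
    BEGIN
       OPEN c;
       LOOP
          -- never more than 10,000 rows in memory at once
          FETCH c BULK COLLECT INTO l_rows LIMIT 10000;
          EXIT WHEN l_rows.COUNT = 0;

          -- transform l_rows here, then insert the batch
          FORALL i IN 1 .. l_rows.COUNT
             INSERT INTO dwh_stg.stg_refined_play_stop_tv_hist VALUES l_rows (i);
          COMMIT;
       END LOOP;
       CLOSE c;
    END;
    /
    ```

    With LIMIT, PGA usage is capped by the batch size instead of growing with the 12-million-row result set, which is what triggers ORA-04030.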
