Bulk collect / forall vs. a single MERGE statement

I understand that a single DML statement is better than using bulk collect with intermediate commits. My only concern is that if I load a large amount of data, say 100 million records into an 800 million row table with foreign keys and indexes, and the session is killed, the rollback may take some time, which is not acceptable. Using bulk collect / forall with commits at intervals is slower than a single straight MERGE statement, but in the case of a dead session the rollback time will not be too bad, and reloading the not-yet-committed data will not be as bad either. For designing a recoverable load that is not hurt as badly by a failure, is bulk collect + forall the right approach?

So if I chunk it up into batches of 50 rows, the child table must be loaded with its matching records once the corresponding parent table records have been loaded and committed.

... and then create a procedure that takes care of the parent AND child data at the same time.

SQL for DBMS_PARALLEL_EXECUTE would be:

"start load_parent_and_child (: start_id,: end_id); end; »

PS - you don't want to run ECD and parallel DML at the same time...
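
For reference, a rough sketch of wiring that statement into DBMS_PARALLEL_EXECUTE could look like the block below. The staging table name STG_PARENT, its numeric ID column, the chunk size and the parallel level are all placeholders, not anything from the original post:

DECLARE
   l_sql VARCHAR2(200);
BEGIN
   DBMS_PARALLEL_EXECUTE.create_task(task_name => 'LOAD_PARENT_AND_CHILD');

   -- carve the staging table's ID range into chunks
   DBMS_PARALLEL_EXECUTE.create_chunks_by_number_col(
      task_name    => 'LOAD_PARENT_AND_CHILD',
      table_owner  => USER,
      table_name   => 'STG_PARENT',
      table_column => 'ID',
      chunk_size   => 50000);

   -- each chunk invokes the procedure with its own :start_id / :end_id range
   l_sql := 'begin load_parent_and_child(:start_id, :end_id); end;';

   DBMS_PARALLEL_EXECUTE.run_task(
      task_name      => 'LOAD_PARENT_AND_CHILD',
      sql_stmt       => l_sql,
      language_flag  => DBMS_SQL.NATIVE,
      parallel_level => 4);

   DBMS_PARALLEL_EXECUTE.drop_task('LOAD_PARENT_AND_CHILD');
END;
/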

MK

Tags: Database

Similar Questions

  • Bulk collect / forall - what collection type?

    Hi I am trying to speed up the query below using bulk collect / forall:

    SELECT h.cust_order_no AS custord, l.shipment_set AS Tess
    FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
    WHERE h.message_id = l.message_id
    AND h.contract = '12384'
    AND l.shipment_set IS NOT NULL
    AND h.cust_order_no IS NOT NULL
    GROUP BY h.cust_order_no, l.shipment_set

    I would like to get the 2 selected fields above into a new table as quickly as possible, but I'm pretty new to Oracle and I find it hard to work out the best way to do it. The code below is not working (no doubt there are many issues), but I hope it's sufficiently developed to show the sort of thing I am trying to achieve:

    DECLARE
    TYPE xcustord IS TABLE OF info.tlp_out_messaging_hdr.cust_order_no%TYPE;
    TYPE xsset IS TABLE OF info.tlp_out_messaging_lin.shipment_set%TYPE;
    TYPE xarray IS TABLE OF tp_a1_tab%ROWTYPE INDEX BY BINARY_INTEGER;
    v_xarray xarray;
    v_xcustord xcustord;
    v_xsset xsset;
    CURSOR cur IS
    SELECT h.cust_order_no AS custord, l.shipment_set AS Tess
    FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
    WHERE h.message_id = l.message_id
    AND h.contract = '1111'
    AND l.shipment_set IS NOT NULL
    AND h.cust_order_no IS NOT NULL;
    BEGIN
    OPEN cur;
    LOOP
    FETCH cur
    BULK COLLECT INTO v_xarray LIMIT 10000;
    EXIT WHEN v_xcustord.COUNT() = 0;
    FORALL i IN 1 .. v_xarray.COUNT
    INSERT INTO TP_A1_TAB (cust_order_no, shipment_set)
    VALUES (v_xarray(i).cust_order_no, v_xarray(i).shipment_set);
    COMMIT;
    END LOOP;
    CLOSE cur;
    END;

    I'm running on Oracle 9i release 2.

    A short-term solution may be a materialized view: pay the cost of the slow and complex query only once per hour, and materialize the results in a table (with indexes supporting the queries against the materialized view).

    Good solution - analyse the logic and the SQL, determine what it does and how it does it, and then figure out how this can be improved.

    Ripping the SQL apart into separate cursors and injecting PL/SQL code to stick it back together is a great way to make performance even worse.
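
    For this particular case, the whole load could stay as one set-based statement - a minimal sketch reusing the corrected query above (table and column names as in the post; commit and indexing considerations left out):

    INSERT INTO tp_a1_tab (cust_order_no, shipment_set)
    SELECT h.cust_order_no, l.shipment_set
    FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
    WHERE h.message_id = l.message_id
    AND h.contract = '12384'
    AND l.shipment_set IS NOT NULL
    AND h.cust_order_no IS NOT NULL
    GROUP BY h.cust_order_no, l.shipment_set;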

  • DBMS_OUTPUT in BULK COLLECT FORALL

    Hello

    I'm trying to figure out how I can produce output using DBMS_OUTPUT.PUT_LINE in a BULK COLLECT / FORALL update.

    Example:

    FETCH ref_cursor BULK COLLECT INTO l_productid, l_qty;
    FORALL indx IN l_productid.FIRST .. l_productid.LAST


    UPDATE products aa
    SET aa.LastInventorySent = l_qty(indx)
    WHERE aa.productid = l_productid(indx);

    DBMS_OUTPUT.PUT_LINE('ProductID: ' || l_productid(indx) || ' QTY: ' || l_qty(indx));

    Is this possible? If so, how can I accomplish this?

    Thank you

    S
    FETCH REF_CURSOR BULK COLLECT INTO l_productid, l_qty;
    forall indx in l_productid.first..l_productid.last
    Update products aa
    Set aa.LastInventorySent = l_qty(indx)
    Where aa.productid = l_productid(indx);
    for indx in 1..l_qty.count loop
     DBMS_OUTPUT.PUT_LINE('ProductID: ' || l_productid(indx) || ' QTY: ' || l_qty(indx));
    end loop;
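
    If the aim is to report what each element actually did, rather than just echo the inputs, SQL%BULK_ROWCOUNT can be read right after the FORALL - a small sketch along the same lines, using the same collection names:

    FORALL indx IN 1 .. l_productid.COUNT
       UPDATE products aa
          SET aa.LastInventorySent = l_qty(indx)
        WHERE aa.productid = l_productid(indx);

    -- SQL%BULK_ROWCOUNT(i) holds the row count of the i-th execution of the DML
    FOR indx IN 1 .. l_productid.COUNT LOOP
       DBMS_OUTPUT.PUT_LINE('ProductID ' || l_productid(indx) ||
                            ' updated ' || SQL%BULK_ROWCOUNT(indx) || ' row(s)');
    END LOOP;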
    

    SY.

  • Bulk collect ForALL - error when inserting via DBLink

    Hi everybody,

    I have two 9i databases on two servers, DB1 and DB2.

    DB1 has a DBLink to DB2.

    I insert values into a DB2 table using values from tables in DB1.

    In a procedure on DB1, I have this code:

    DECLARE
    TYPE TEExtFinanceiro IS TABLE OF EExtFinanceiro@sigrhext%ROWTYPE INDEX BY PLS_INTEGER;
    eExtFinanceiroTab TEExtFinanceiro;
    BEGIN

    ....

    IF eExtFinanceiroTab.count > 0 THEN

    FORALL vIndice IN eExtFinanceiroTab.First .. eExtFinanceiroTab.Last
    INSERT INTO eExtFinanceiro@sigrhExt VALUES eExtFinanceiroTab(vIndice);

    COMMIT;

    END IF;

    ...
    END;

    The fields in the eExtFinanceiro table are nullable.

    This command inserts the rows into the eExtFinanceiro table, but all the inserted values are null.

    What could happen?
    Can someone help me, please?

    Thank you!

    Hello

    FORALL has a limitation: it does not work over a DBLink.
    Bulk operations (such as FORALL) exist to minimize the context switches between the PL/SQL and SQL engines.
    For remote database operations there is no such context switch to save, since every DB link call has to go over a connection to the remote database anyway.
    In your scenario it will be much better to use a PULL approach rather than PUSH. That way you still get to use FORALL.

    As in:
    have the procedure in DB2 (instead of having it in DB1). That way the DML used inside the FORALL does not go over a DBLink.
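
    A minimal sketch of that pull approach, assuming the procedure lives on DB2, a link named db1_link points back to DB1, and the source and target tables share the same structure (all of these names are hypothetical):

    CREATE OR REPLACE PROCEDURE pull_ext_financeiro IS
       TYPE t_ext IS TABLE OF eExtFinanceiro%ROWTYPE INDEX BY PLS_INTEGER;
       l_rows t_ext;
       CURSOR c_src IS SELECT * FROM eExtFinanceiro_src@db1_link;  -- pull the data over the link
    BEGIN
       OPEN c_src;
       LOOP
          FETCH c_src BULK COLLECT INTO l_rows LIMIT 1000;
          EXIT WHEN l_rows.COUNT = 0;
          -- the FORALL DML is now purely local, so the DB link limitation no longer applies
          FORALL i IN 1 .. l_rows.COUNT
             INSERT INTO eExtFinanceiro VALUES l_rows(i);
          COMMIT;
       END LOOP;
       CLOSE c_src;
    END pull_ext_financeiro;
    /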

    I hope that helps!

    See you soon,
    AA

  • Bulk collect VS single insert

    Hi guys, just a quick question (I hope) on this subject.

    I have a table (T1) containing 4,000,000 ID rows (not all unique). I was going to create a cursor pulling back all the distinct IDs from this table, find those IDs in T2 and then insert that T2 data into T3 (the archive table for T2).

    I was going to do this loop with bulk collect/forall, with some limit.

    Would it be faster than a simple DML statement like the one below?

    INSERT into T3
    Select * from T2 where T2.id in (select distinct T1.id from T1);

    The id column on T1 and T2 is indexed and is the primary key, and I expect there to be about 130,000 inserts into T3.

    Hello

    If you want to find the optimal method, you need to look at the execution plans first to see what the differences are. I was going to just post the following without checking exactly what the optimizer does, and I would have suggested...

    What you're likely to find for these two queries

    select t1.* from table1 t1
    where t1.id in (select distinct id from table2);
    
    select t1.* from table1 t1,(select distinct id from table2) t2
    where t1.id=t2.id;
    

    is a SORT UNIQUE or HASH UNIQUE on table2, while with

    select t1.* from table1 t1
    where exists (select 1 from table2 t2
    where t1.id=t2.id);
    

    you'll likely get a semi join without the need for a sort.

    However, I decided to check, and this is the result

    DTYLER_APP@pssdev2> create table table1 as select rownum id, lpad(' ',150) padding from dual connect by level <=1000000
      2  /
    
    Table created.
    
    DTYLER_APP@pssdev2> create table table2 as select * from table1;
    
    Table created.
    
    DTYLER_APP@pssdev2> insert into table2 select * from table1;
    
    1000000 rows created.
    
    DTYLER_APP@pssdev2> commit;
    
    Commit complete.
    
    DTYLER_APP@pssdev2> explain plan for
      2  select t1.* from table1 t1
      3  where t1.id in (select distinct id from table2);
    
    Explained.
    
    DTYLER_APP@pssdev2> select * from table(dbms_xplan.display)
      2  /
    
    PLAN_TABLE_OUTPUT
    ---------------------------------------------------------------------------------------
    Plan hash value: 1399795188
    
    ---------------------------------------------------------------------------------------
    | Id  | Operation            | Name   | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT     |        |   963K|    94M|       | 19284   (1)| 00:03:52 |
    |*  1 |  HASH JOIN RIGHT SEMI|        |   963K|    94M|    46M| 19284   (1)| 00:03:52 |
    |   2 |   TABLE ACCESS FULL  | TABLE2 |  1951K|    24M|       |  8156   (1)| 00:01:38 |
    |   3 |   TABLE ACCESS FULL  | TABLE1 |   963K|    82M|       |  4148   (1)| 00:00:50 |
    ---------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       1 - access("T1"."ID"="ID")
    
    Note
    -----
       - dynamic sampling used for this statement
    
    19 rows selected.
    
    DTYLER_APP@pssdev2> explain plan for
      2  select t1.* from table1 t1,(select distinct id from table2) t2
      3  where t1.id=t2.id;
    
    Explained.
    
    DTYLER_APP@pssdev2> select * from table(dbms_xplan.display)
      2  /
    
    PLAN_TABLE_OUTPUT
    ---------------------------------------------------------------------------------------
    Plan hash value: 4031624413
    
    ---------------------------------------------------------------------------------------
    | Id  | Operation            | Name   | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT     |        |  1951K|   191M|       | 28606   (1)| 00:05:44 |
    |*  1 |  HASH JOIN           |        |  1951K|   191M|    46M| 28606   (1)| 00:05:44 |
    |   2 |   VIEW               |        |  1951K|    24M|       | 17473   (1)| 00:03:30 |
    |   3 |    HASH UNIQUE       |        |  1951K|    24M|    74M| 17473   (1)| 00:03:30 |
    |   4 |     TABLE ACCESS FULL| TABLE2 |  1951K|    24M|       |  8156   (1)| 00:01:38 |
    |   5 |   TABLE ACCESS FULL  | TABLE1 |   963K|    82M|       |  4148   (1)| 00:00:50 |
    ---------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       1 - access("T1"."ID"="T2"."ID")
    
    Note
    -----
       - dynamic sampling used for this statement
    
    21 rows selected.
    
    DTYLER_APP@pssdev2> explain plan for
      2  select t1.* from table1 t1
      3  where exists (select 1 from table2 t2
      4  where t1.id=t2.id);
    
    Explained.
    
    DTYLER_APP@pssdev2> select * from table(dbms_xplan.display)
      2  /
    
    PLAN_TABLE_OUTPUT
    ---------------------------------------------------------------------------------------
    Plan hash value: 1399795188
    
    ---------------------------------------------------------------------------------------
    | Id  | Operation            | Name   | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT     |        |   963K|    94M|       | 19284   (1)| 00:03:52 |
    |*  1 |  HASH JOIN RIGHT SEMI|        |   963K|    94M|    46M| 19284   (1)| 00:03:52 |
    |   2 |   TABLE ACCESS FULL  | TABLE2 |  1951K|    24M|       |  8156   (1)| 00:01:38 |
    |   3 |   TABLE ACCESS FULL  | TABLE1 |   963K|    82M|       |  4148   (1)| 00:00:50 |
    ---------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       1 - access("T1"."ID"="T2"."ID")
    
    Note
    -----
       - dynamic sampling used for this statement
    
    19 rows selected.
    

    So this shows that the IN has been transformed into an EXISTS and there is no need for a hash unique or sort. The version with the inline view was not transformed in the same way and required the HASH UNIQUE. With this in mind, it would seem that the second query is the least efficient, because it requires the additional unique operation.
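
    Translating that back to the original question, the set-based load could then just be a single statement along these lines (a sketch using the T1/T2/T3 names from the post):

    INSERT INTO t3
    SELECT t2.*
    FROM t2
    WHERE EXISTS (SELECT 1
                  FROM t1
                  WHERE t1.id = t2.id);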

    HTH

    David

    Edited by: Bravid on 13 Sep 2011 09:55

    Edited by: Bravid on 13 Sep 2011 09:56

  • Need help with bulk collect / ForAll update

    Hi - I'm doing a bulk collect/ForAll update but am having issues.

    My statements look like this:
         CURSOR cur_hhlds_for_update is
            SELECT hsh.household_id, hsh.special_handling_type_id
              FROM compas.household_special_handling hsh
                 , scr_id_lookup s
             WHERE hsh.household_id = s.id
               AND s.scr = v_scr
               AND s.run_date = TRUNC (SYSDATE)
               AND effective_date IS NULL
               AND special_handling_type_id = 1
               AND created_by != v_user;
    
         TYPE rec_hhlds_for_update IS RECORD (
              household_id  HOUSEHOLD_SPECIAL_HANDLING.household_id%type,
              spec_handl_type_id HOUSEHOLD_SPECIAL_HANDLING.SPECIAL_HANDLING_TYPE_ID%type
              );
    
         TYPE spec_handling_update_array IS TABLE OF rec_hhlds_for_update;
         l_spec_handling_update_array  spec_handling_update_array;
    And then Bulk Collect/ForAll looks like this:
           OPEN cur_hhlds_for_update;
           LOOP
           
            FETCH cur_hhlds_for_update BULK COLLECT INTO l_spec_handling_update_array LIMIT 1000;
            EXIT WHEN l_spec_handling_update_array.count = 0;
    
            FORALL i IN 1..l_spec_handling_update_array.COUNT
              
            UPDATE compas.household_special_handling
               SET effective_date =  TRUNC(SYSDATE)
                 , last_modified_by = v_user
                 , last_modified_date = SYSDATE
             WHERE household_id = l_spec_handling_update_array(i).household_id
               AND special_handling_type_id = l_spec_handling_update_array(i).spec_handl_type_id;
    
              l_special_handling_update_cnt := l_special_handling_update_cnt + SQL%ROWCOUNT;          
              
          END LOOP;
    And this is the error I get:
    ORA-06550: line 262, column 31:
    PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND table of records
    ORA-06550: line 262, column 31:
    PLS-00382: expression is of wrong type
    ORA-06550: line 263, column 43:
    PL/SQL: ORA-22806: not an object or REF
    ORA-06550: line 258, column 9:
    PL/SQL: SQ
    My problem is that the table being updated has a composite primary key, so I have two conditions in my where clause. This is the first time I've tried a bulk collect/ForAll update, and it seems it would be straightforward if I were only dealing with a single-column primary key. Can someone please advise what I'm missing here, or how I can achieve this?

    Thank you!
    Christine

    You cannot reference a field of a record inside a FORALL; you need to reference each bind as a whole collection. So you'll need two collections.

    Try like this,

    DECLARE
       CURSOR cur_hhlds_for_update
       IS
          SELECT hsh.household_id, hsh.special_handling_type_id
            FROM compas.household_special_handling hsh, scr_id_lookup s
           WHERE hsh.household_id = s.ID
             AND s.scr = v_scr
             AND s.run_date = TRUNC (SYSDATE)
             AND effective_date IS NULL
             AND special_handling_type_id = 1
             AND created_by != v_user;
    
       TYPE arr_household_id IS TABLE OF HOUSEHOLD_SPECIAL_HANDLING.household_id%TYPE
                                   INDEX BY BINARY_INTEGER;
    
       TYPE arr_spec_handl_type_id IS TABLE OF HOUSEHOLD_SPECIAL_HANDLING.SPECIAL_HANDLING_TYPE_ID%TYPE
                                         INDEX BY BINARY_INTEGER;
    
       l_household_id_col         arr_household_id;
       l_spec_handl_type_id_col   arr_spec_handl_type_id;
    BEGIN
       OPEN cur_hhlds_for_update;
    
       LOOP
          FETCH cur_hhlds_for_update
            BULK COLLECT INTO l_household_id_col, l_spec_handl_type_id_col
          LIMIT 1000;
    
          EXIT WHEN cur_hhlds_for_update%NOTFOUND;
    
          FORALL i IN l_household_id_col.FIRST .. l_household_id_col.LAST
             UPDATE compas.household_special_handling
                SET effective_date = TRUNC (SYSDATE),
                    last_modified_by = v_user,
                    last_modified_date = SYSDATE
              WHERE household_id = l_household_id_col(i)
                AND special_handling_type_id = l_spec_handl_type_id_col(i);
       --l_special_handling_update_cnt := l_special_handling_update_cnt + SQL%ROWCOUNT; -- Not sure what this does.
    
       END LOOP;
    END;
    

    G.

  • PLS-00201: identifier 'i' must be declared when using BULK COLLECT with FORALL to insert data in 2 tables?

    Hi.

    Declare
       cursor c_1
       is
        select col1,col2,col3,col4
        from table1
    
    
       type t_type is table of c_1%rowtype index by binary_integer;
       v_data t_type;
    BEGIN
       OPEN c_1;
       LOOP
          FETCH c_1 BULK COLLECT INTO v_data LIMIT 200;
          EXIT WHEN v_data.COUNT = 0;
          FORALL i IN v_data.FIRST .. v_data.LAST
             INSERT INTO xxc_table
               (col1,
                col3,
                col4
               )
                SELECT v_data (i).col1,
                       v_data (i).col3,
                       v_data (i).col4
                  FROM DUAL
                 WHERE NOT EXISTS
                              (SELECT 1
                                 FROM xxc_table a
                                WHERE col1=col1
                                      .....
                              );
                         --commit;
             INSERT INTO xxc_table1
               (col1,
               col2,
              col3,
              col4
               )
                SELECT v_data (i).col1,
                       v_data (i).col2,
                       v_data (i).col3,
                       'Y'
                  FROM DUAL
                 WHERE NOT EXISTS
                              (SELECT 1
                                 FROM xxc_table1 a
                                WHERE col1=col1
          .....
         );
    
    
           --exit when c_1%notfound;
       END LOOP;
       CLOSE c_1;
       commit;
    END;
    
    
    
    
    
    
    
    

    I get "40/28 PLS-00201: identifier 'I' must be declared". What is the problem in the above code? Please help me - I have lakhs of rows of data.

    Thank you

    Post edited by: Rajesh123 I changed IDX

    Post edited by: Rajesh123 changed t_type c_1 in Fetch

    But using a multi-table INSERT to insert into both tables at once in the same statement would do the job without any PL/SQL bulk collect, and would avoid querying the source twice too.

    for example, as a single INSERT...

    SQL> create table table1 as
      2  select 1 as col1, 1 as col2, 1 as col3, 1 as col4 from dual union all
      3  select 2,2,2,2 from dual union all
      4  select 3,3,3,3 from dual union all
      5  select 4,4,4,4 from dual union all
      6  select 5,5,5,5 from dual union all
      7  select 6,6,6,6 from dual union all
      8  select 7,7,7,7 from dual union all
      9  select 8,8,8,8 from dual union all
     10  select 9,9,9,9 from dual union all
     11  select 10,10,10,10 from dual
     12  /

    Table created.

    SQL> create table xxc_table as
      2  select 1 as col1, 2 as col3, 3 as col4 from dual union all
      3  select 3, 4, 5 from dual union all
      4  select 5, 6, 7 from dual
      5  /

    Table created.

    SQL> create table xxc_table1 as
      2  select 3 as col1, 4 as col2, 5 as col3, 'N' as col4 from dual union all
      3  select 6, 7, 8, 'N' from dual
      4  /

    Table created.

    SQL> insert all
      2    when xt_insert is null then
      3      into xxc_table (col1, col3, col4)
      4      values (col1, col3, col4)
      5    when xt1_insert is null then
      6      into xxc_table1 (col1, col2, col3, col4)
      7      values (col1, col2, col3, 'Y')
      8  select t1.col1, t1.col2, t1.col3, t1.col4
      9        ,xt.col1 as xt_insert
     10        ,xt1.col1 as xt1_insert
     11  from table1 t1
     12  left outer join xxc_table xt on (t1.col1 = xt.col1)
     13  left outer join xxc_table1 xt1 on (t1.col1 = xt1.col1)
     14  /

    15 rows created.

    SQL> select * from xxc_table order by 1;
    COL1 COL3 COL4
    ---------- ---------- ----------
    1          2          3
    2          2          2
    3          4          5
    4          4          4
    5          6          7
    6          6          6
    7          7          7
    8          8          8
    9          9          9
    10         10         10

    10 rows selected.

    SQL> select * from xxc_table1 order by 1;

    COL1 COL2 COL3 C
    ---------- ---------- ---------- -
    1          1          1 Y
    2          2          2 Y
    3          4          5 N
    4          4          4 Y
    5          5          5 Y
    6          7          8 N
    7          7          7 Y
    8          8          8 Y
    9          9          9 Y
    10         10         10 Y

    10 rows selected.

    SQL>

  • IF IN FORALL AND BULK COLLECT

    Hi all
    I wrote a program... I have a doubt: can I use an IF condition with a FORALL INSERT or BULK COLLECT? I can't go with a 'for' loop... Is it possible to do validations with FORALL INSERT / BULK COLLECT as we do in a 'for' loop?

    CREATE OR REPLACE
    PROCEDURE name AS
    CURSOR CUR_name IS

    SELECT OLD_name, NEW_name FROM DIRECTORY_LISTING_AUDIT;

    TYPE V_OLD_name IS TABLE OF DIRECTORY_LISTING_AUDIT.OLD_name%TYPE;
    Z_V_OLD_name V_OLD_name;


    TYPE V_NEW_name IS TABLE OF DIRECTORY_LISTING_AUDIT.NEW_name%TYPE;
    Z_V_NEW_name V_NEW_name;

    BEGIN

    OPEN CUR_name;
    LOOP
    FETCH CUR_name BULK COLLECT INTO Z_V_OLD_name, Z_V_NEW_name;

    IF Z_V_NEW_name <> NULL THEN
    Z_V_OLD_name := Z_V_NEW_name;
    Z_V_NEW_name := NULL;
    END IF;

    FORALL I IN Z_V_NEW_name.COUNT


    INSERT INTO TEMP_DIREC_AUDIT (OLD_name, NewName) VALUES (Z_V_OLD_name(I), Z_V_NEW_name(I));

    EXIT WHEN CUR_name%NOTFOUND;
    END LOOP;
    CLOSE CUR_name;
    END name;

    I think that's it; it's good, isn't it?
    I assumed that there was a

    != 
    

    missing in

    IF Z_V_NEW_name != NULL THEN
    Z_V_OLD_name := Z_V_NEW_name ;
    Z_V_NEW_name := NULL;
    END IF;
    

    Who knows? I'm just guessing at what's missing.

    In any case, this is why the original "something like" warning ;)
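
    For what it's worth, a FORALL can only wrap a single DML statement, so the usual pattern is to do any per-row IF logic in a plain FOR loop over the collections first and keep the FORALL itself as one INSERT - a rough sketch reusing the (reconstructed) names from the post:

    -- adjust the collections element by element first
    FOR i IN 1 .. Z_V_NEW_name.COUNT LOOP
       IF Z_V_NEW_name(i) IS NOT NULL THEN
          Z_V_OLD_name(i) := Z_V_NEW_name(i);
          Z_V_NEW_name(i) := NULL;
       END IF;
    END LOOP;

    -- then a single FORALL with one DML statement
    FORALL i IN 1 .. Z_V_OLD_name.COUNT
       INSERT INTO TEMP_DIREC_AUDIT (OLD_name, NewName)
       VALUES (Z_V_OLD_name(i), Z_V_NEW_name(i));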

  • BULK COLLECT and FORALL

    Hello Experts,

    Please review the excerpt below, as it errors with PLS-00103: Encountered the symbol "END".

    IF v_geospatial_coordinate_type = 'None'
    THEN
    OPEN p_cursor;
    LOOP
    FETCH p_cursor
    BULK COLLECT INTO v_search_results_basic LIMIT 100;
    FOR i IN 1..v_search_results_basic.count


    EXIT WHEN v_search_results_basic.count < 100;
    END LOOP;

    CLOSE p_cursor;

    Thanks for the help and of course your professionalism.

    you missed "LOOP".

     IF v_geospatial_coordinate_type = 'None'
       THEN
       OPEN p_cursor;
       LOOP
           FETCH p_cursor
              BULK COLLECT INTO v_search_results_basic LIMIT 100;
           FOR I in 1..v_search_results_basic.count
           LOOP --<---
    
           EXIT WHEN v_search_results_basic.count < 100;
       END LOOP;
    
       CLOSE p_cursor;
    
  • Bulk collect with sequence Nextval

    Hello


    Oracle Database 10 g Enterprise Edition Release 10.2.0.4.0 - 64 bit


    I have a doubt about bulk collect with a sequence NEXTVAL. I need to update a table with a value from a sequence's NEXTVAL.

    Where should I place the SELECT below in the proc - before or after the loop?


    SELECT prop_id_s.nextval INTO v_prop_id FROM dual;
    CREATE OR REPLACE PROCEDURE  (state IN varchar2)
    AS
       
       CURSOR get_all
       IS
          SELECT                              /*+ parallel (A, 8) */
                A .ROWID
                from Loads A WHERE A.Prop_id IS NULL;
    
       TYPE b_ROWID
       IS
          TABLE OF ROWID
             INDEX BY BINARY_INTEGER;
    
       
       lns_rowid          b_ROWID;
    BEGIN
    
    
       OPEN Get_all;
    
       LOOP
          FETCH get_all BULK COLLECT INTO   lns_rowid LIMIT 10000;
    
          FORALL I IN 1 .. lns_rowid.COUNT
             UPDATE   loads a
                SET   a.prop_id= v_prop_id (I)
              WHERE   A.ROWID = lns_rowid (I) AND a.prop_id IS NULL;
    
          
          COMMIT;
          EXIT WHEN get_all%NOTFOUND;
       END LOOP;
    
       
    
       CLOSE Get_all;
    END;
    /
    Edited by: 960736 on January 23, 2013 12:51

    Hello

    It depends on what results you want. Should all updated rows get the same value, or should each row get a unique value?

    Either way, you don't need the cursors and the loop. Just a simple UPDATE statement.

    If each row needs a unique value from the sequence, then:

    UPDATE  loads
    SET     prop_id     = prod_id_s.NEXTVAL
    WHERE     prop_id     IS NULL
    ;
    

    If all the rows that have a NULL need the same value, then:

    SELECT     prod_id_s.nextval
    INTO     v_prop_id
    FROM     dual;
    
    UPDATE  loads
    SET     prop_id     = v_prop_id
    WHERE     prop_id     IS NULL
    ;
    

    Don't forget to declare v_prop_id as a NUMBER.

    I hope that answers your question.
    If not, post a small example of the data (CREATE TABLE and INSERT statements, only the relevant columns) for all the tables and sequences involved, and also post the results you want from that data.
    If you're asking about a DML statement, such as UPDATE, the sample data should be the contents of the table(s) before the DML, and the results should be the state of the changed table(s) when it's all over.
    Explain, using specific examples, how you get those results from that data.
    Always say which version of Oracle you are using (for example, 11.2.0.2.0).
    See the forum FAQ {message:id=9360002}

  • Using bulk collect into with limit to avoid running out of TEMP tablespace?

    Hi all

    I want to know if using BULK COLLECT INTO with a LIMIT will help avoid running out of TEMP tablespace.

    We use Oracle 11 g R1.

    I have been assigned a task of creating journaling (audit) tables for all the tables in an APEX application.

    I created procedures to execute some SQL statements to do a CTAS (create table as select), and then create triggers on those tables.

    We have about three tables with more than 26 million records.

    It seemed to run very well until we reached a table with more than 15 million records, where we got an error saying we had run out of TEMP tablespace.

    I googled this topic and found these tips:

    Use NOLOGGING

    Use parallel

    BULK COLLECT INTO with a LIMIT

    However, those tips usually address running out of memory rather than running out of TEMP tablespace.

    I'm just a junior developer and have not dealt with tables of more than 10 million records at a time like this before.

    The database support is outsourced, so we try to keep contact with the DBA to a minimum. My manager asked me to find a solution without asking the administrator to extend the TEMP tablespace.

    I wrote a few BULK COLLECT INTO ... LIMIT loops to insert about 300,000 rows at a time in the development environment. It seems to work.

    But the code only works against a 4,000,000-record table. I am trying to add more data into the test table, but yet again we ran out of tablespace on DEV (this time it's a DATA tablespace, not TEMP).

    I'll give it a go against the 26 million record table on Production this weekend. I just want to know if it is worth trying.

    Thanks for reading this.

    Ann

    I really needed to check that you did not have huge row sizes (like several KB per row); they are not bad at all, which is good!

    A good rule of thumb for sizing the LIMIT clause is to see how much memory you can afford to consume in the PGA (to reduce the number of fetch and FORALL calls and therefore the context switches) and adjust the limit to be as close to that amount as possible.

    Use the routines below to check what threshold value would be best suited to your system, because it depends on your memory allocation and CPU consumption. Flex it based on your PGA limits as row lengths vary, but this method will give you a good order of magnitude.

    CREATE OR REPLACE PROCEDURE show_pga_memory (context_in IN VARCHAR2 DEFAULT NULL)
    IS
       l_memory NUMBER;
    BEGIN
       SELECT st.value
         INTO l_memory
         FROM SYS.v_$session se, SYS.v_$sesstat st, SYS.v_$statname nm
        WHERE se.audsid = USERENV ('SESSIONID')
          AND st.statistic# = nm.statistic#
          AND se.sid = st.sid
          AND nm.name = 'session pga memory';

       DBMS_OUTPUT.put_line (CASE
                                WHEN context_in IS NULL
                                THEN NULL
                                ELSE context_in || ' - '
                             END
                             || 'PGA memory used in session = '
                             || TO_CHAR (l_memory)
                            );
    END show_pga_memory;
    /

    DECLARE
       PROCEDURE fetch_all_rows (limit_in IN PLS_INTEGER)
       IS
          CURSOR source_cur
          IS
             SELECT *
               FROM YOUR_TABLE;

          TYPE source_aat IS TABLE OF source_cur%ROWTYPE
             INDEX BY PLS_INTEGER;

          l_source   source_aat;
          l_start    PLS_INTEGER;
          l_end      PLS_INTEGER;
       BEGIN
          DBMS_SESSION.free_unused_user_memory;

          show_pga_memory (limit_in || ' - BEFORE');

          l_start := DBMS_UTILITY.get_cpu_time;

          OPEN source_cur;

          LOOP
             FETCH source_cur
               BULK COLLECT INTO l_source LIMIT limit_in;

             EXIT WHEN l_source.COUNT = 0;
          END LOOP;

          CLOSE source_cur;

          l_end := DBMS_UTILITY.get_cpu_time;

          DBMS_OUTPUT.put_line (   'Elapsed CPU time for limit of '
                                || limit_in
                                || ' = '
                                || TO_CHAR (l_end - l_start)
                               );

          show_pga_memory (limit_in || ' - AFTER');
       END fetch_all_rows;
    BEGIN
       fetch_all_rows (20000);
       fetch_all_rows (40000);
       fetch_all_rows (60000);
       fetch_all_rows (80000);
       fetch_all_rows (100000);
       fetch_all_rows (150000);
       fetch_all_rows (250000);
       -- etc.
    END;
    /

  • Get 'not enough values error' in bulk collect

    I want to insert all the rows from the employees table into the tmp table, which has the same structure.

    Purpose: just trying out the bulk bind feature in an anonymous block to create a backup of a table.

    Problem: my code is throwing a "not enough values" error. Please point out where I've gone wrong.

    structure of the employees table:

    SQL> desc employees
     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------------
     EMPLOYEE_ID                               NOT NULL NUMBER(6)
     FIRST_NAME                                         VARCHAR2(20)
     LAST_NAME                                 NOT NULL VARCHAR2(25)
     EMAIL                                     NOT NULL VARCHAR2(25)
     PHONE_NUMBER                                       VARCHAR2(20)
     HIRE_DATE                                 NOT NULL DATE
     JOB_ID                                    NOT NULL VARCHAR2(10)
     SALARY                                             NUMBER(8,2)
     COMMISSION_PCT                                     NUMBER(2,2)
     MANAGER_ID                                         NUMBER(6)
     DEPARTMENT_ID                                      NUMBER(4)

    tmp table structure:

    SQL> desc tmp
     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------------
     EMPLOYEE_ID                                        NUMBER(6)
     FIRST_NAME                                         VARCHAR2(20)
     LAST_NAME                                 NOT NULL VARCHAR2(25)
     EMAIL                                     NOT NULL VARCHAR2(25)
     PHONE_NUMBER                                       VARCHAR2(20)
     HIRE_DATE                                 NOT NULL DATE
     JOB_ID                                    NOT NULL VARCHAR2(10)
     SALARY                                             NUMBER(8,2)
     COMMISSION_PCT                                     NUMBER(2,2)
     MANAGER_ID                                         NUMBER(6)
     DEPARTMENT_ID                                      NUMBER(4)

    SQL> select * from tmp;

    no rows selected

    Code:

    declare
    type rec is table of employees%rowtype
    index by pls_integer;
    a rec;
    begin
    select * bulk collect into a
    from employees;
    forall i in a.first..a.last
    insert into tmp values (a(i));
    end;
    /

    Result:

    SQL> declare
      2
      3  type rec is table of employees%rowtype
      4  index by pls_integer;
      5  a rec;
      6
      7  begin
      8  delete tmp;
      9  select * bulk collect into a
     10  from employees;
     11  forall i in a.first..a.last
     12  insert into tmp values (a(i));
     13  end;
     14  /
    insert into tmp values (a(i));
    *
    ERROR at line 12:
    ORA-06550: line 12, column 13:
    PL/SQL: ORA-00947: not enough values
    ORA-06550: line 12, column 1:
    PL/SQL: SQL statement ignored

    Remove parentheses

    insert into tmp values a(i);
    

    or call the individual columns

    insert into tmp( employee_id, first_name, ... )
     values( a(i).employee_id, a(i).first_name, ... );
    

    Justin

  • Error using BULK collect with RECORD TYPE

    Hello

    I wrote a simple procedure to declare a record type & then a nested table type based on it.

    I then select the data using BULK COLLECT & try to access it via a LOOP... and I get an ERROR.

    ------------------------------------------------------------------------------------------------------------------------------------------------------

    CREATE OR REPLACE PROCEDURE sp_test_bulkcollect
    IS

    TYPE rec_type IS RECORD (
    emp_id VARCHAR2(20),
    level_id NUMBER
    );

    TYPE v_rec_type IS TABLE OF rec_type;

    BEGIN

    SELECT employe_id, level_id
    BULK COLLECT INTO v_rec_type
    FROM portfolio_exec_level_mapping
    WHERE portfolio_execp_id = 2851852;

    FOR indx IN v_rec_type.FIRST .. v_rec_type.LAST
    LOOP

    dbms_output.put_line('Emp -- ' || v_rec_type.emp_id(indx) || ' ' || v_rec_type.level_id(indx));

    END LOOP;

    END;
    -----------------------------------------------------------------------------------------------------------------------------------

    Here is the ERROR I get...


    - Compilation errors for PROCEDURE DOMRATBDTESTUSER.SP_TEST_BULKCOLLECT

    Error: PLS-00321: expression 'V_REC_TYPE' is not appropriate for the left hand side of an assignment statement
    Line: 15
    Text: FROM portfolio_exec_level_mapping

    Error: PL/SQL: ORA-00904: invalid identifier
    Line: 16
    Text: WHERE portfolio_execp_id = 2851852;

    Error: PL/SQL: statement ignored
    Line: 14
    Text: BULK COLLECT INTO v_rec_type

    Error: PLS-00302: component 'FIRST' must be declared
    Line: 19
    Text: LOOP

    Error: PL/SQL: statement ignored
    Line: 19
    Text: LOOP
    ------------------------------------------------------------------------------------------------

    Help PLZ.

    and with a complete code example:

    SQL> CREATE OR REPLACE PROCEDURE sp_test_bulkcollect
      2  IS
      3  TYPE rec_type IS RECORD (
      4  emp_id VARCHAR2(20),
      5  level_id NUMBER
      6  );
      7  TYPE v_rec_type IS TABLE OF rec_type;
      8  v v_rec_type;
      9  BEGIN
     10     SELECT empno, sal
     11     BULK COLLECT INTO v
     12     FROM emp
     13     WHERE empno = 7876;
     14     FOR indx IN v.FIRST..v.LAST
     15     LOOP
     16        dbms_output.put_line('Emp -- '||v(indx).emp_id||' '||v(indx).level_id);
     17     END LOOP;
     18  END;
     19  /
    
    Procedure created.
    
    SQL>
    SQL> show error
    No errors.
    SQL>
    SQL> begin
      2     sp_test_bulkcollect;
      3  end;
      4  /
    Emp -- 7876 1100
    
    PL/SQL procedure successfully completed.
    
  • Bulk collect into the record type

    Sorry for the stupid question - I'm doing something really simple wrong here, but cannot figure it out. I want to select a few rows from a table via a cursor, then bulk collect them into a record. I'll possibly extend the record to include additional fields that I will populate from function calls, but I can't get this simple test case to run...

    PLS-00497 is the main error.

    Thanks in advance.
    create table test (
    id number primary key,
    val varchar2(20),
    something_else varchar2(20));
    
    insert into test (id, val,something_else) values (1,'test1','else');
    insert into test (id, val,something_else) values (2,'test2','else');
    insert into test (id, val,something_else) values (3,'test3','else');
    insert into test (id, val,something_else) values (4,'test4','else');
    
    commit;
    
    SQL> declare
      2   cursor test_cur is
      3   (select id, val
      4   from test);
      5
      6   type test_rt is record (
      7     id   test.id%type,
      8     val      test.val%type);
      9
     10   test_rec test_rt;
     11
     12  begin
     13    open test_cur;
     14    loop
     15      fetch test_cur bulk collect into test_rec limit 10;
     16       null;
     17     exit when test_rec.count = 0;
     18    end loop;
     19    close test_cur;
     20  end;
     21  /
        fetch test_cur bulk collect into test_rec limit 10;
                                         *
    ERROR at line 15:
    ORA-06550: line 15, column 38:
    PLS-00497: cannot mix between single row and multi-row (BULK) in INTO list
    ORA-06550: line 17, column 21:
    PLS-00302: component 'COUNT' must be declared
    ORA-06550: line 17, column 2:
    PL/SQL: Statement ignored

    You must declare a collection (array) based on your record type.

    DECLARE
       CURSOR test_cur
       IS
             SELECT
                id,
                val
             FROM
                test
       ;
    type test_rt
    IS
       record
       (
          id test.id%type,
          val test.val%type);
       type test_rec_arr is table of test_rt index by pls_integer;
       test_rec test_rec_arr;
    BEGIN
       OPEN test_cur;
       LOOP
          FETCH
             test_cur bulk collect
          INTO
             test_rec limit 10;
          NULL;
          EXIT
       WHEN test_rec.count = 0;
       END LOOP;
       CLOSE test_cur;
    END;
     31  /
    
    PL/SQL procedure successfully completed.
    
    Elapsed: 00:00:00.06
    ME_XE?
    

    Notice that the difference is...

       type test_rec_arr is table of test_rt index by pls_integer;
       test_rec test_rec_arr;
    
  • bulk collect with limit

    The code is below; the cursor here can fetch a large amount of data (22000 records), so I have to go with the LIMIT clause. I use two FORALLs. Is this code snippet correct with regard to the two FORALLs, the EXIT and the SQL%BULK_ROWCOUNT statement? I have to use it in production. Is there anything that can break the code?

    OPEN c1;
    LOOP
    FETCH c1 BULK COLLECT INTO id_array LIMIT 100;
    FORALL i IN id_array.FIRST .. id_array.LAST
    Update statement;
    FOR i IN id_array.FIRST .. id_array.LAST
    LOOP
    v_cnt := v_cnt + SQL%BULK_ROWCOUNT(i);
    END LOOP;
    FORALL i IN id_array.FIRST .. id_array.LAST
    Insert statement;
    EXIT WHEN id_array.COUNT = 0;
    END LOOP;
    CLOSE c1;

    Also, how does context switching work when we use the LIMIT clause?
    Without a limit:
    the collection is completely filled, then all the DML statements are prepared, sent to the SQL engine and executed one by one.
    Right?
    Now say the LIMIT clause is 100:
    100 indexes of the collection get populated, then all the DML statements for those 100 indexes are prepared, sent to the SQL engine and executed, and then control comes back to PL/SQL. The same 100 indexes of the collection are filled with the next 100 records, the DML is prepared and sent to the SQL engine to run, control returns to PL/SQL, and so on. Is this true?
    Right?

    large volume of data (22000 records)

    This isn't a large volume of data.

    Also, how does context switching work when we use the LIMIT clause?

    You will find some good explanations here:

    http://www.Oracle.com/technology/tech/pl_sql/PDF/doing_sql_from_plsql.PDF
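
    In short, with LIMIT 100 each pass of the loop is one bulk fetch (SQL to PL/SQL) followed by one batched DML call (PL/SQL to SQL), repeated until the cursor is exhausted - roughly this shape, with hypothetical cursor, table and column names:

    OPEN c1;
    LOOP
       -- one context switch: the SQL engine fills up to 100 elements
       FETCH c1 BULK COLLECT INTO id_array LIMIT 100;
       EXIT WHEN id_array.COUNT = 0;

       -- one context switch: up to 100 updates go to the SQL engine as a single batch
       FORALL i IN 1 .. id_array.COUNT
          UPDATE some_table
             SET processed_flag = 'Y'
           WHERE id = id_array(i);
    END LOOP;
    CLOSE c1;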
