BULK COLLECT with FORALL

I need to update deposit_tbl, which has around 1,500,000 records; every day approximately 1 million records are updated.
I use the following code:

DECLARE
   l_awaiting_status NUMBER;
   l_pending_status  NUMBER;

   CURSOR s_cur IS
      SELECT dt.dep_ref_num
        FROM deposit_tbl dt
       WHERE dt.status_value = l_awaiting_status
         AND TRUNC (dt.settle_due_dt) <= TRUNC (SYSDATE);

   TYPE fetch_array IS TABLE OF s_cur%ROWTYPE;
   s_array fetch_array;
BEGIN
   l_awaiting_status := 16;
   l_pending_status  := 0;

   OPEN s_cur;
   LOOP
      FETCH s_cur BULK COLLECT INTO s_array LIMIT 100000;

      FORALL i IN 1 .. s_array.COUNT
         UPDATE deposit_tbl
            SET status_value = l_pending_status,
                update_by    = 'ST_SYSTEM',
                update_dt    = SYSDATE
          WHERE dep_ref_num = s_array (i);

      EXIT WHEN s_cur%NOTFOUND;
   END LOOP;
   CLOSE s_cur;
END;


Error: expression is of wrong type, for the condition WHERE dep_ref_num = s_array (i);

How can I avoid this problem and update the table in batches of 100,000 records at a time? A plain row-by-row loop is too expensive and a parallel update is not possible.

Hello
Please replace WHERE dep_ref_num = s_array(i); with WHERE dep_ref_num = s_array(i).dep_ref_num; - s_array is a table of s_cur%ROWTYPE records, so each element must be dereferenced to its column field.
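
Applied to the block above, the corrected fetch/update loop would read as follows (only the WHERE clause changes):

OPEN s_cur;
LOOP
   FETCH s_cur BULK COLLECT INTO s_array LIMIT 100000;

   FORALL i IN 1 .. s_array.COUNT
      UPDATE deposit_tbl
         SET status_value = l_pending_status,
             update_by    = 'ST_SYSTEM',
             update_dt    = SYSDATE
       WHERE dep_ref_num = s_array (i).dep_ref_num;  -- field of the %ROWTYPE record

   EXIT WHEN s_cur%NOTFOUND;
END LOOP;
CLOSE s_cur;

Alternatively, declaring the collection as TABLE OF deposit_tbl.dep_ref_num%TYPE (a table of scalars rather than of records) would let the original s_array(i) reference compile as-is.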

Thank you
JULIEN

Edited by: dev on January 21, 2013 06:32

Tags: Database

Similar Questions

  • BULK COLLECT not processing all rows when using the LIMIT clause

    Hi all

    I was reading the Oracle site article on BULK COLLECT.

    http://www.Oracle.com/technology/oramag/Oracle/08-Mar/o28plsql.html

    I found the following code there and ran the same.

    I just want to know why the PL/SQL engine is not processing the last 27 rows when I use the %NOTFOUND cursor attribute.


    PROCEDURE process_all_rows_foi_test (p_limit PLS_INTEGER DEFAULT 100)
    IS
       CURSOR c1
       IS
          SELECT *
            FROM all_objects
           WHERE ROWNUM <= 227;
    
       TYPE foi_rec IS TABLE OF c1%ROWTYPE
          INDEX BY PLS_INTEGER;
    
       v_foi_rec   foi_rec;
       v_number    NUMBER  := 1;
    BEGIN
       OPEN c1;
    
       LOOP
          FETCH c1
          BULK COLLECT INTO v_foi_rec LIMIT p_limit;
    
          EXIT WHEN v_foi_rec.COUNT = 0;--------EXIT WHEN c1%NOTFOUND;--->Here is the issue
          DBMS_OUTPUT.put_line (v_number);
          v_number := v_number + 1;
       END LOOP;
    
       CLOSE c1;
    END;
    Please guide me on this.

    Thank you
    Arun

    %NOTFOUND will be TRUE when the fetch brings back fewer rows than the limit

    (it is documented)

    But your workaround works fine

    SQL> declare
      2     CURSOR c1
      3     IS
      4        SELECT *
      5          FROM all_objects
      6         WHERE ROWNUM <= 227;
      7
      8     TYPE foi_rec IS TABLE OF c1%ROWTYPE
      9        INDEX BY PLS_INTEGER;
     10
     11     v_foi_rec   foi_rec;
     12     v_number    NUMBER  := 1;
     13  BEGIN
     14     OPEN c1;
     15
     16     LOOP
     17        FETCH c1
     18        BULK COLLECT INTO v_foi_rec LIMIT 100; --p_limit;
     19
     20        EXIT WHEN v_foi_rec.COUNT = 0;--------EXIT WHEN c1%NOTFOUND;--->Here is the issue
     21        DBMS_OUTPUT.put_line (v_foi_rec.count);
     22        v_number := v_number + 1;
     23     END LOOP;
     24
     25     CLOSE c1;
     26  END;
     27  /
    100
    100
    27
    
    PL/SQL procedure successfully completed.
    
    SQL> 
    

    Another option would be to place the EXIT right before your END LOOP;
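
    In other words, keep the %NOTFOUND test but do the processing first (a sketch of the same loop):

    OPEN c1;
    LOOP
       FETCH c1
       BULK COLLECT INTO v_foi_rec LIMIT p_limit;

       -- process the batch first; the last (partial) fetch is then never skipped
       DBMS_OUTPUT.put_line (v_foi_rec.COUNT);

       EXIT WHEN c1%NOTFOUND;
    END LOOP;
    CLOSE c1;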

  • UNION operator with BULK COLLECT into a collection type

    Hi all

    I created a type as given below:
    create or replace type coltest is table of number;

    Here are 3 PL/SQL blocks that populate data into variables of the above table type:


    BLOCK 1:

    DECLARE
       col1 coltest := coltest (1, 2, 3, 4, 5, 11);
       col2 coltest := coltest (6, 7, 8, 9, 10);
       col3 coltest := coltest ();
    BEGIN
       SELECT *
         BULK COLLECT INTO col1
         FROM (SELECT * FROM TABLE (CAST (col1 AS coltest))
               UNION ALL
               SELECT * FROM TABLE (CAST (col2 AS coltest)));

       dbms_output.put_line ('col1');
       dbms_output.put_line ('col1.count: ' || col1.COUNT);

       FOR i IN 1 .. col1.COUNT
       LOOP
          dbms_output.put_line (col1 (i));
       END LOOP;
    END;

    OUTPUT:
    col1
    col1.count: 5
    6
    7
    8
    9
    10



    BLOCK 2:

    DECLARE
       col1 coltest := coltest (1, 2, 3, 4, 5, 11);
       col2 coltest := coltest (6, 7, 8, 9, 10);
       col3 coltest := coltest ();
    BEGIN
       SELECT *
         BULK COLLECT INTO col2
         FROM (SELECT * FROM TABLE (CAST (col1 AS coltest))
               UNION ALL
               SELECT * FROM TABLE (CAST (col2 AS coltest)));

       dbms_output.put_line ('col2');
       dbms_output.put_line ('col2.count: ' || col2.COUNT);

       FOR i IN 1 .. col2.COUNT
       LOOP
          dbms_output.put_line (col2 (i));
       END LOOP;
    END;

    OUTPUT:
    col2
    col2.count: 6
    1
    2
    3
    4
    5
    11

    BLOCK 3:

    DECLARE
       col1 coltest := coltest (1, 2, 3, 4, 5, 11);
       col2 coltest := coltest (6, 7, 8, 9, 10);
       col3 coltest := coltest ();
    BEGIN
       SELECT *
         BULK COLLECT INTO col3
         FROM (SELECT * FROM TABLE (CAST (col1 AS coltest))
               UNION ALL
               SELECT * FROM TABLE (CAST (col2 AS coltest)));

       dbms_output.put_line ('col3');
       dbms_output.put_line ('col3.count: ' || col3.COUNT);

       FOR i IN 1 .. col3.COUNT
       LOOP
          dbms_output.put_line (col3 (i));
       END LOOP;
    END;


    OUTPUT:

    col3
    col3.count: 11
    1
    2
    3
    4
    5
    11
    6
    7
    8
    9
    10

    Can someone please explain the output of BLOCK 1 and BLOCK 2? Why doesn't the bulk collect into col1 and col2 return 11 as the count?

    If I remember correctly, the INTO part of the query initializes (empties) the collection it will collect the data into; so if you bulk collect into a collection that you are also querying, you end up wiping the data out of that collection before it is queried.

    Either way, it is not wise to try to collect data into a collection that you are also querying.
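
    If the goal is simply to concatenate two nested tables, a PL/SQL collection operator avoids the SQL detour (and the self-query problem) entirely - a sketch using the same coltest type:

    DECLARE
       col1 coltest := coltest (1, 2, 3, 4, 5, 11);
       col2 coltest := coltest (6, 7, 8, 9, 10);
       col3 coltest;
    BEGIN
       col3 := col1 MULTISET UNION ALL col2;   -- 11 elements, no SQL engine round trip
       dbms_output.put_line ('col3.count: ' || col3.COUNT);
    END;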

  • On BULK COLLECT + FORALL vs. a simple MERGE statement

    I understand that a single DML statement is better than using bulk collect with intermediate commits. My only concern is that if I load a large amount of data, say 100 million records into an 800 million row table with foreign keys and indexes, and the session is killed, the rollback may take a long time, which is not acceptable. Using bulk collect + forall with interval commits is slower than a single straight MERGE statement, but in the case of a dead session the rollback time will not be too bad, and reloading the not-yet-committed data will not be as bad either. For designing a recoverable data load that cannot be hurt as badly, is bulk collect + forall the right approach?

    So if I chunk it up, say 50 rows at a time, the child table rows must be loaded together with their matching parent table rows and committed with them.

    ... and then create a procedure that takes care of the parent AND child data at the same time.

    The SQL statement for DBMS_PARALLEL_EXECUTE would be:

    begin load_parent_and_child(:start_id, :end_id); end;

    PS - you don't want to run parallel DDL and parallel DML at the same time...

    MK
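
    For reference, a minimal DBMS_PARALLEL_EXECUTE sketch along those lines - the table name PARENT_TBL, its numeric ID column, the chunk size and the load_parent_and_child procedure are assumptions for illustration:

    DECLARE
       l_task VARCHAR2(30) := 'LOAD_PARENT_CHILD';
    BEGIN
       DBMS_PARALLEL_EXECUTE.create_task(l_task);

       -- carve the driving table into id ranges of ~10000 rows each
       DBMS_PARALLEL_EXECUTE.create_chunks_by_number_col(
          task_name    => l_task,
          table_owner  => USER,
          table_name   => 'PARENT_TBL',
          table_column => 'ID',
          chunk_size   => 10000);

       -- each chunk runs the procedure with its own :start_id/:end_id and commits
       DBMS_PARALLEL_EXECUTE.run_task(
          task_name      => l_task,
          sql_stmt       => 'begin load_parent_and_child(:start_id, :end_id); end;',
          language_flag  => DBMS_SQL.NATIVE,
          parallel_level => 4);

       DBMS_PARALLEL_EXECUTE.drop_task(l_task);
    END;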

  • Collection intervals for the IC Agents

    Hi all

    Not sure how many of you are on 5.6 and use the Infrastructure cartridge. Does anyone know if it is possible to set different collection intervals for different metrics? We are OK with collecting performance metrics (CPU/memory/process, etc.) every 5 minutes, but we want to collect EventLog and Services every 1 minute. We used to be able to do this with the old OS cartridge, but with the new Infra cartridge I couldn't quite figure out how.

    Thank you

    Xiaoning

    It is not possible to do so today with a single agent. The HostAgent uses a single collection schedule for all the metrics it collects. You could approach this by creating two agents - one to collect metrics on the default 5-minute frequency and the other to collect only events and services on a 1-minute schedule. I have filed an enhancement request to consider separate collection schedules in a later release.

    Message edited by Stuart Hodgins to correct typos.

  • Bulk Collect and Millions of records.

    Hey guys,

    I've been experimenting with bulk collect on 11gR2...

    I have millions of rows in some very large tables.

    In fact, my question is this: how do you use bulk collect when you have millions of records?

    Every time I try to use bulk collect into, I run out of memory.

    So should I stick with the SQL engine when it comes to manipulating millions of rows? Is bulk collect mainly meant for insert/update application code?

    I've been banging my head against this for a while. Can a PL/SQL pro please give me some advice?

    In most cases an SQL engine insert/update will end up being quicker than a PL/SQL select + insert/update. Normally you would use PL/SQL if there is complex logic based on calculations across several rows that cannot easily be done in SQL. If you must BULK COLLECT many and/or wide rows, you can divide the work into chunks using LIMIT.

    SY.
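
    A chunked fetch along those lines might look like this (a sketch; big_tbl and big_tbl_copy are hypothetical tables with identical structure):

    DECLARE
       CURSOR c IS SELECT * FROM big_tbl;

       TYPE t_rows IS TABLE OF big_tbl%ROWTYPE;
       l_rows t_rows;
    BEGIN
       OPEN c;
       LOOP
          FETCH c BULK COLLECT INTO l_rows LIMIT 10000;  -- bounds PGA memory use
          EXIT WHEN l_rows.COUNT = 0;

          FORALL i IN 1 .. l_rows.COUNT
             INSERT INTO big_tbl_copy VALUES l_rows (i);
       END LOOP;
       CLOSE c;
       COMMIT;
    END;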

  • Exception handlers in BULK COLLECT and FORALL operations?

    Hello world

    My version of DB is

    BANNER

    ----------------------------------------------------------------

    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi

    PL/SQL Release 10.2.0.1.0 - Production

    CORE 10.2.0.1.0 Production

    AMT for Linux: Version 10.2.0.1.0 - Production

    NLSRTL Version 10.2.0.1.0 - Production

    My question is, what are the possible exception handlers we can add in BULK COLLECT and FORALL operations?

    When we use FORALL, we add SAVE EXCEPTIONS and SQL%BULK_EXCEPTIONS. But apart from that, what can we add to BULK COLLECT?

    Kind regards

    BS2012.

    SAVE EXCEPTIONS stores all the exceptions that occur during bulk processing in a collection and, at the end of the bulk operation, raises a single exception. The SQL%BULK_EXCEPTIONS collection holds all of those exceptions. That is the right way to handle exceptions during bulk processing, and it's all you need. Not sure what else you are expecting.
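
    A minimal sketch of that pattern (some_tbl and its columns are hypothetical; ORA-24381 is the error FORALL raises at the end when SAVE EXCEPTIONS has collected any failures):

    DECLARE
       bulk_errors EXCEPTION;
       PRAGMA EXCEPTION_INIT (bulk_errors, -24381);

       TYPE t_ids IS TABLE OF NUMBER;
       l_ids t_ids := t_ids (1, 2, 3);
    BEGIN
       FORALL i IN 1 .. l_ids.COUNT SAVE EXCEPTIONS
          UPDATE some_tbl SET some_col = 0 WHERE id = l_ids (i);
    EXCEPTION
       WHEN bulk_errors THEN
          -- report each failed iteration and its error
          FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
             DBMS_OUTPUT.put_line ('row ' || SQL%BULK_EXCEPTIONS (j).ERROR_INDEX
                                || ': '   || SQLERRM (-SQL%BULK_EXCEPTIONS (j).ERROR_CODE));
          END LOOP;
    END;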

  • Using the cursor FOR loop vs. BULK COLLECT INTO

    Hi all
    In which cases do we prefer the cursor FOR loop over the cursor with BULK COLLECT? The following contains two blocks with the same query: one uses a cursor FOR loop, the other uses BULK COLLECT. Which one performs better for the given task? How do we measure the performance difference between the two?

    I use the HR sample schema:
    declare
    l_start number;
    BEGIN
    l_start:= DBMS_UTILITY.get_time;
    dbms_lock.sleep(1);
    FOR employee IN (SELECT e.last_name, j.job_title FROM employees e,jobs j 
    where e.job_id=j.job_id and  e.job_id LIKE '%CLERK%' AND e.manager_id > 120 ORDER BY e.last_name)
    LOOP
      DBMS_OUTPUT.PUT_LINE ('Name = ' || employee.last_name || ', Job = ' || employee.job_title);
    END LOOP;
    DBMS_OUTPUT.put_line('total time: ' || to_char(DBMS_UTILITY.get_time - l_start) || ' hsecs');
    END;
    /
     
    declare
    l_start number;
    type rec_type is table of varchar2(20);
    name_rec rec_type;
    job_rec rec_type;
    begin
    l_start:= DBMS_UTILITY.get_time;
    dbms_lock.sleep(1);
    SELECT e.last_name, j.job_title bulk collect into name_rec,job_rec FROM employees e,jobs j 
    where e.job_id=j.job_id and  e.job_id LIKE '%CLERK%' AND e.manager_id > 120 ORDER BY e.last_name;
    for j in name_rec.first..name_rec.last loop
      DBMS_OUTPUT.PUT_LINE ('Name = ' || name_rec(j) || ', Job = ' || job_rec(j));
    END LOOP;
    DBMS_OUTPUT.put_line('total time: ' || to_char(DBMS_UTILITY.get_time - l_start) || ' hsecs');
    end;
    /
    In this code I put a timestamp in each block, but they are useless, since both blocks finish virtually instantaneously...

    Best regards
    Val

    (1) The primary benefit of bulk fetching is to reduce context switching between the SQL and PL/SQL engines.
    (2) You should always use LIMIT with BULK COLLECT, so that it does not blow up the load on the PGA.
    (3) The ideal number of LIMIT rows is around 100.

    Also, if you really want to compare the performance of the two different PL/SQL approaches, try using Tom Kyte's runstats package:

    http://asktom.Oracle.com/pls/Apex/asktom.download_file?p_file=6551378329289980701
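
    With that package installed, the comparison would look something like this (a sketch; runstats_pkg and its rs_start/rs_middle/rs_stop procedures come from the download above):

    BEGIN
       runstats_pkg.rs_start;

       -- run approach 1 here: the cursor FOR loop block

       runstats_pkg.rs_middle;

       -- run approach 2 here: the BULK COLLECT block

       runstats_pkg.rs_stop (1000);  -- report statistics/latches differing by more than 1000
    END;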

  • Bulk insert using FORALL is slow

    HII All,
    I have two sql scripts. Having just insert statements and the other using bulk insert both do the same thing.

    (1) using the Bulk Insert
    Set serveroutput on;
    
    Declare
              type t_hgn_no is table of r_dummy_1%rowtype
              index by binary_integer;
              type t_flx_no is table of varchar2(4000)
              index by binary_integer;
              
              l_hgn_no t_hgn_no;
              l_flx_no t_flx_no;
    
              begin_time number;
              end_time   number;
    
              
    Begin
         select (dbms_utility.get_time) into begin_time from dual;
         dbms_output.put_line('started at : '||begin_time);
         
         
         With t as
         (
         Select '100004501' HOGAN_REFERENCE_NUMBER , '320IVLA092811011' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100014501' HOGAN_REFERENCE_NUMBER , '320IVLA092811010' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100024501' HOGAN_REFERENCE_NUMBER , '320IVLA092811009' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100034501' HOGAN_REFERENCE_NUMBER , '320IVLA092811008' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100044501' HOGAN_REFERENCE_NUMBER , '320IVLA092811007' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '10006501' HOGAN_REFERENCE_NUMBER , '140IGL2092811951' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100074501' HOGAN_REFERENCE_NUMBER , '320IVLA092811006' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '10007501' HOGAN_REFERENCE_NUMBER , '200IVLA092810617' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100084501' HOGAN_REFERENCE_NUMBER , '320SVLA092810002' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100094501' HOGAN_REFERENCE_NUMBER , '320IVLA092811005' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100104501' HOGAN_REFERENCE_NUMBER , '320IVLA092811004' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100114501' HOGAN_REFERENCE_NUMBER , '320IVLA092811003' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100124501' HOGAN_REFERENCE_NUMBER , '320IVLA092811002' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100134501' HOGAN_REFERENCE_NUMBER , '320IVLA092811001' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100144501' HOGAN_REFERENCE_NUMBER , '320SVLA092810001' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '10016501' HOGAN_REFERENCE_NUMBER , '140IGL2092811950' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '10017501' HOGAN_REFERENCE_NUMBER , '200IVLA092810616' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100217851' HOGAN_REFERENCE_NUMBER , '520USDL092818459' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '1002501' HOGAN_REFERENCE_NUMBER , '100PVL2092813320' FLEXCUBE_REFERENCE_NUMBER from dual 
              )
         Select HOGAN_REFERENCE_NUMBER,FLEXCUBE_REFERENCE_NUMBER
         bulk collect into l_hgn_no
         from t;
    
         forall i in 1..l_hgn_no.count
         
         Insert into r_dummy_1 values l_hgn_no(i);
    
         
    
    
    
    
    
    
         
         Commit;
         select (dbms_utility.get_time) into end_time from dual;
         dbms_output.put_line('ended at : '||end_time);
    
         
         
    Exception
              When others then
                   dbms_output.put_line('Exception : '||sqlerrm);
                   rollback;
    End;
    /
    Duration for bulk collect
    ==================


    SQL> @d:/bulk_insert.sql
    started at : 1084934013
    ended at : 1084972317

    PL/SQL procedure successfully completed.

    SQL> select 1084972317 - 1084934013 from dual;

    1084972317-1084934013
    ---------------------
                    38304




    (2) using the Insert statement
    Declare
              begin_time number;
              end_time   number;
    
    Begin
                        select (dbms_utility.get_time) into begin_time from dual;
                        dbms_output.put_line('started at : '||begin_time);
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('36501', '100CFL3092811385');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('106501', '100CFL3092811108');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('172501', '100SFL1092810013');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('192501', '100SVL2092814600');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('212501', '100SVL2092814181');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('272501', '100AFL309281B2LZ');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('292501', '100AVL2092812200');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('332501', '100SVL2092814599');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('346501', '100AFL309281B2LY');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('372501', '100SVL2092814598');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('382501', '100IVL1092811512');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('422501', '100SFL1092810020');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('432501', '100IVL1092811447');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('462501', '100CFL3092811107');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('492501', '100SVL2092814245');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('542501', '100AVL2092812530');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('592501', '100CFL3092811105');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('612501', '100SVL2092814242');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('632501', '100CFL3092811384');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('712501', '100PVL2092813321');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('722501', '100PVL2092813311');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('732501', '100PVL2092813341');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('742501', '100PVL2092813319');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('752501', '100PVL2092813308');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('762501', '100PVL2092813338');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('772501', '100PVL2092813316');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('782501', '100PVL2092813305');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('786501', '100CFL2092810051');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('792501', '100PVL2092813335');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('802501', '100PVL2092813313');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('812501', '100PVL2092813302');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('822501', '100PVL2092813332');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('832501', '100PVL2092813310');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('852501', '100PVL2092813329');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('862501', '100PVL2092813307');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('872501', '100PVL2092813299');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('882501', '100PVL2092813326');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('922501', '100PVL2092813304');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('932501', '100PVL2092813296');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('952501', '100PVL2092813300');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('962501', '100PVL2092813293');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('972501', '100PVL2092813323');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('982501', '100PVL2092813297');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1002501', '100PVL2092813320');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1012501', '100PVL2092813294');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1022501', '100PVL2092813290');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1032501', '100PVL2092813317');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1042501', '100PVL2092813291');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1052501', '100PVL2092813287');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1062501', '100PVL2092813315');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1072501', '100PVL2092813288');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1082501', '100AFL309281B2LX');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1092501', '100PVL2092813312');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1102501', '100PVL2092813285');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1112501', '100PVL2092813284');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1122501', '100PVL2092813309');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1142501', '100PVL2092813281');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1152501', '100PVL2092813306');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1162501', '100PVL2092813282');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1166501', '100CFL3092811383');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1212501', '100IVL1092811445');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1232501', '100IVL1092811526');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1272501', '100IVL1092811441');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1292501', '100IVL1092811523');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1302501', '100PVL2092813303');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1312501', '100PVL2092813279');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1322501', '100PVL2092813278');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1332501', '100PVL2092813301');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1342501', '100PVL2092813276');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1352501', '100PVL2092813275');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1376501', '100AFL309281B2LW');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1382501', '100PVL2092813272');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1392501', '100PVL2092813298');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1402501', '100PVL2092813273');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1412501', '100PVL2092813269');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1446501', '100RNF6092810019');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1452501', '100IVL1092811436');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1492501', '100CFL3092811382');
    
         
    
                        
                        select (dbms_utility.get_time) into end_time from dual;
                        dbms_output.put_line('ended at : '||end_time);
    Exception
              When Others Then
                         dbms_output.put_line('Exception Occured '||sqlerrm);
                         rollback;
    End;
    /
    duration for the insert script
    ====================
    SQL> @@d:/insert_bhanu.sql
    started at : 1084984928
    ended at : 1084988401
    
    PL/SQL procedure successfully completed.
    
    SQL> select 1084988401 - 1084984928 from dual;
    
    1084988401-1084984928
    ---------------------
                     3473
    I was unable to paste all of the code... it has almost 13,851 records. Please suggest the best option, and whether there is another way to achieve this.
    I need to provide an optimized solution to my clients.

    Regards,
    Rambeau.

    792353 wrote:

    I have two SQL scripts: one has plain insert statements and the other uses bulk insert; both do the same thing.

    Not really valid for the purposes of comparison.

    The fundamental question is what makes bulk processing faster. It is well known and easily answered - a reduction in context switches between the SQL and PL/SQL engines. And that's all. Nothing more, and nothing magical.

    The easiest way to show this difference is to eliminate all other factors - especially I/O, as one test may be disadvantaged by physical I/O while the comparison test may be favoured by logical I/O. Another factor that must be eliminated is extra unnecessary SQL that adds overhead (such as the use of DUAL and unions) and so on.

    Remember that it is critical for a benchmark driver to compare like with like and eliminate all other factors. Keep it as simplistic as possible. For example, something like the following:

    SQL> create table foo_table( id number );
    
    Table created.
    
    SQL> var iterations number
    SQL> exec :iterations := 10000;
    
    PL/SQL procedure successfully completed.
    
    SQL>
    SQL> declare
      2          t1      timestamp with time zone;
      3  begin
      4          dbms_output.put_line( 'bench A: hard parsing, normal delete' );
      5          t1 := systimestamp;
      6          for i in 1..:iterations
      7          loop
      8                  execute immediate 'delete from foo_table where id = '||i;
      9          end loop;
     10          dbms_output.put_line( systimestamp - t1 );
     11  end;
     12  /
    bench A: hard parsing, normal delete
    +000000000 00:00:07.639779000
    
    PL/SQL procedure successfully completed.
    
    SQL>
    SQL> declare
      2          t1      timestamp with time zone;
      3  begin
      4          dbms_output.put_line( 'bench B: soft parsing, normal delete' );
      5          t1 := systimestamp;
      6          for i in 1..:iterations
      7          loop
      8                  delete from foo_table where id = i;
      9          end loop;
     10          dbms_output.put_line( systimestamp - t1 );
     11  end;
     12  /
    bench B: soft parsing, normal delete
    +000000000 00:00:00.268915000
    
    PL/SQL procedure successfully completed.
    
    SQL>
    SQL> declare
      2          type TNumbers is table of number;
      3          t1      timestamp with time zone;
      4          idList  TNumbers;
      5  begin
      6          dbms_output.put_line( 'bench C: soft parsing, bulk delete' );
      7          idList := new TNumbers();
      8          idList.Extend( :iterations );
      9
     10          for i in 1..:iterations
     11          loop
     12                  idList(i) := i;
     13          end loop;
     14
     15          t1 := systimestamp;
     16          forall i in 1..:iterations
     17                  delete from foo_table where id = idList(i);
     18          dbms_output.put_line( systimestamp - t1 );
     19  end;
     20  /
    bench C: soft parsing, bulk delete
    +000000000 00:00:00.061639000
    
    PL/SQL procedure successfully completed.
    
    SQL> 
    

    Why an empty table? It eliminates potential problems with physical versus logical I/O. Why a DELETE and not an INSERT statement? The same reason.

    The foregoing clearly shows how slow hard parsing is. Bench B has the advantage of soft parsing. Bench C has the further advantage of reduced context switching, at the expense of additional memory - bench C consumes much more (PGA) memory than the other benchmarks.

    Also note that these benchmarks run the exact same DELETE SQL statement - just using different approaches. So the approach alone (and not the SQL) made the difference. This means we can draw valid conclusions about each of these approaches.

    Can we say the same about the scripts you used to "show" that bulk processing is apparently slow?

  • BULK COLLECT with a FOR loop?

    Let's say I have a table with a column number and a date column.

    I also have a query that takes a number and a date as parameters. I want to run this query for each row in the table (using that row's date and number columns as the parameters). My thinking is to loop through the rows and run the query with each row's values, but that becomes a large number of SELECT queries. Is it possible to use bulk collect here? All the examples I've seen use a predefined cursor. I also don't see how to solve this without a cursor.

    Here's the query I want to run for each row:
    select * from reading_values rv, 
        (select * from 
           (select id, datereading
               from readings
                  where datereading < p_date and pointid = p_id
                  order by datereading desc)
           where rownum <= 1) t
         where rv.reading_id = t.id

    After reading your initial statement 3 times, I simply added a third table to the select. I call this tableX, with two columns: colA (number) and colB (date).

    select rv.*, r.*, row_number() over (partition by r.pointid  order by r.datereading desc) rn
    from reading_values rv
    join readings r on rv.reading_id = r.id
    join tableX x on x.colA = r.pointid and  r.datereading < x.colB
    order by  r.pointid, datereading desc
    

    You can restrict it to return only one row per data point by using the rn column I already added.

    select * /* probably need single column names here to avoid duplications */
    from (
       select rv.*, r.*, row_number() over (partition by r.pointid  order by r.datereading desc) rn
       from reading_values rv
       join readings r on rv.reading_id = r.id
       join tableX x on x.colA = r.pointid and  r.datereading < x.colB
    )
    where rn = 1
    order by pointid, datereading desc
    

    Edited by: Sven W. on June 23, 2010 16:47
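
    And if the rows are needed inside PL/SQL rather than as a plain result set, the joined query can itself be bulk collected in one round trip (a sketch reusing the hypothetical tableX):

    DECLARE
       TYPE t_ids   IS TABLE OF readings.id%TYPE;
       TYPE t_dates IS TABLE OF readings.datereading%TYPE;
       l_ids   t_ids;
       l_dates t_dates;
    BEGIN
       SELECT r.id, r.datereading
         BULK COLLECT INTO l_ids, l_dates
         FROM readings r
         JOIN tableX x ON x.colA = r.pointid AND r.datereading < x.colB;
    END;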

  • BULK COLLECT and FORALL costs

    Hi all

    I need to fetch 500,000 rows from a table, apply logic to each row, and insert the result into another table.
    Currently I'm trying it with BULK COLLECT. Are there any other alternatives to improve performance?
    Can we do better than BULK COLLECT? Please suggest, with examples...

    Thank you and best regards,
    Sri

    user12023859 wrote:
    As you suggested, I removed all the selects inside the bulk iteration and I could see the performance improvement. I still use PL/SQL to convert row values to column values. I fetch my input data rows through a cursor.

    You are still doing a nested SELECT statement (just now through explicit cursors).

    You fetch an account row, then you open a (SQL) cursor for the balance, and then process that. You are doing exactly what the SQL engine does in a nested loops join - only your code can never be as fast and efficient as the SQL engine in this regard.

    Which begs the question: why manually join accounts with balances using PL/SQL code? Why not use SQL to join the two tables instead?

    All of the PL/SQL code must be replaced by the following SQL statement:

    insert into balance_due
    select
       .. SQL logic using CASE statements...
    from cmf,
         balances,
         cost_centres
    where ..predicates..
    

    And then this code can be run from PL/SQL, followed by a commit. That's all. Simple, effective, fast and scalable code.

    You must maximize SQL and minimize PL/SQL for performance. SQL is superior to PL/SQL when it comes to crunching data. Use PL/SQL only for process flow control, conditional logic and so on - use SQL to process the data.

    Pulling data from the SQL engine into the PL engine (bulk collect notwithstanding) is still row processing... aka slow-by-slow processing.

    SQL is a very powerful language, able to process the data in the database. PL is not.

  • How do I create a smart collection of all photos taken on Christmas Eve across all years (every December 24)?

    I'd like to create a smart collection of all photos taken on a specific date, such as December 24, across all years!

    It is very tedious to do that in Lightroom without a plug-in.

    I think the Any Filter plug-in from John R. Ellis can create this collection without much pain (I don't know whether it creates a smart collection or a regular collection).

  • Need LOCAL help for ALL my apps from the CS6 Master Collection

    Help for CS products has gone downhill with every version since CS4. I just installed CS6 Master Collection and can't get help to work in 90% of the applications. It shouldn't be this difficult. I shouldn't lose hours searching these endless damn user forums trying to get simple help for a product. Adobe is just trying to slide its support responsibilities off onto the public.

    I want FULL LOCAL HELP CONTENT for ALL my CS6 Master Collection apps installed on MY machine, and when I click 'Help' that is what I want. I don't want to go online to get help.

    How do I do that, and how do I make 'LOCAL' the default for ALL help?

    Joel

    Try this two-part solution:

    Part I: set the Adobe Help application to use offline help content (if it exists) as a source rather than the online help

    1. Start Adobe Help.exe if it was installed on your machine with the CS6 products. I found it installed on my Win7 machine at C:\Program Files (x86)\Adobe\Adobe Help\Adobe Help.exe. It will open the Adobe Help Manager preferences.

    2. Select the 'Yes' radio button for 'General Settings' > 'Display local help content only'.

    Part II: Download and set up help offline (PDF) for your product from the adobe.com website.

    1. Read the 'Use offline help in Adobe products' KB at http://helpx.Adobe.com/x-productkb/global/offline-help.html and download the help PDF for your CS6 (or current CC) version.

    2. After downloading the PDF locally, follow the section 'Using the help PDF from within your Adobe product' to put the PDF files on your computer so they work with the Adobe products.

    As mentioned in the KB:

    • Windows 7
      C:\Users\Public\Documents\Adobe\PDF\pdf\
    • Windows 8
      C:\Users\Public\Public Documents\Adobe\PDF\pdf\
    • Mac OS
      /Users/Shared/Documents/Adobe/PDF/pdf/

    Important: the CS6 help PDFs must be copied to the Adobe\PDF\pdf\cs6\ folder in order to work with the CS6 versions of the Adobe products.

  • Using BULK COLLECT INTO with LIMIT to avoid running out of TEMP tablespace?

    Hi all

    I want to know whether using BULK COLLECT INTO with LIMIT will help avoid running out of TEMP tablespace.

    We use Oracle 11gR1.

    I am assigned a task of creating journaling for all the tables in an APEX query.

    I created procedures to execute some SQL statements that do a CTAS (create table as select), and then create triggers on these tables.

    We have about three tables with more than 26 million records.

    It seemed to run very well until we reached a table with more than 15 million records; we got an error saying we ran out of TEMP tablespace.

    I googled this topic and found these tips:

    Use NOLOGGING

    Use PARALLEL

    Use BULK COLLECT INTO with LIMIT

    However, those tips usually address running out of memory rather than running out of TEMP tablespace.

    I'm just a junior developer and have not dealt with tables of more than 10 million records at a time like this before.

    The database support is outsourced, so we try to keep contact with the DBA to a minimum. My manager asked me to find a solution without asking the DBA to extend the TEMP tablespace.

    I wrote a few BULK COLLECT INTO loops to insert about 300,000 rows at once in the development environment. It seems to work.

    But the code only works against a table of 4,000,000 records. I am trying to add more data into the test table, but yet again we run out of tablespace on DEV (this time it's a DATA tablespace, not TEMP).

    I'll give it a go against the 26-million-record table in Production this weekend. I just want to know if it is worth trying.

    Thanks for reading this.

    Ann

    I really needed to check that you did not have huge row sizes (like several KB per row); they are not bad at all, which is good!

    A good rule of thumb for maximizing the LIMIT clause value is to see how much memory you can afford to consume in the PGA (to reduce the number of fetch and FORALL calls, and therefore the context switches) and adjust the limit to be as close to that amount as possible.

    Use the routines below to check what threshold value would be best suited for your system, because it depends on your memory allocation and CPU consumption. Flex it based on your PGA limits - row lengths vary - but this method will get you a good order of magnitude.

    CREATE OR REPLACE PROCEDURE show_pga_memory (context_in IN VARCHAR2 DEFAULT NULL)
    IS
       l_memory NUMBER;
    BEGIN
       SELECT st.value
         INTO l_memory
         FROM SYS.v_$session se, SYS.v_$sesstat st, SYS.v_$statname nm
        WHERE se.audsid = USERENV ('SESSIONID')
          AND st.statistic# = nm.statistic#
          AND se.sid = st.sid
          AND nm.name = 'session pga memory';

       DBMS_OUTPUT.put_line (CASE
                                WHEN context_in IS NULL
                                THEN NULL
                                ELSE context_in || ' - '
                             END
                             || 'PGA memory used in session = '
                             || TO_CHAR (l_memory)
                            );
    END show_pga_memory;

    DECLARE
       PROCEDURE fetch_all_rows (limit_in IN PLS_INTEGER)
       IS
          CURSOR source_cur
          IS
             SELECT *
               FROM YOUR_TABLE;

          TYPE source_aat IS TABLE OF source_cur%ROWTYPE
             INDEX BY PLS_INTEGER;

          l_source source_aat;
          l_start  PLS_INTEGER;
          l_end    PLS_INTEGER;
       BEGIN
          DBMS_SESSION.free_unused_user_memory;
          show_pga_memory (limit_in || ' - BEFORE');
          l_start := DBMS_UTILITY.get_cpu_time;

          OPEN source_cur;
          LOOP
             FETCH source_cur
             BULK COLLECT INTO l_source LIMIT limit_in;

             EXIT WHEN l_source.COUNT = 0;
          END LOOP;
          CLOSE source_cur;

          l_end := DBMS_UTILITY.get_cpu_time;
          DBMS_OUTPUT.put_line ('Elapsed CPU time for limit of '
                                || limit_in
                                || ' = '
                                || TO_CHAR (l_end - l_start)
                               );
          show_pga_memory (limit_in || ' - AFTER');
       END fetch_all_rows;
    BEGIN
       fetch_all_rows (20000);
       fetch_all_rows (40000);
       fetch_all_rows (60000);
       fetch_all_rows (80000);
       fetch_all_rows (100000);
       fetch_all_rows (150000);
       fetch_all_rows (250000);
       -- etc.
    END;

  • Bulk collect using different column values to insert into a table

    Hi all

    I give an example using the emp table; my original table has 100 million records. I need to change the group (i.e. deptno) from 10 to 20, copying the same records.

    With this code I get an exception:

    ORA-06550: line 11, column 53:

    PLS-00386: type mismatch found at 'EMP_TAB' between FETCH cursor and INTO variables

    Can someone please help me with this?

    declare
       type row_tab is table of emp%rowtype
          index by pls_integer;

       emp_tab row_tab;

       cursor cur_emp is
          select empno, ename, 20 deptno, hiredate, comm from emp;
    begin
       open cur_emp;
       loop
          fetch cur_emp bulk collect into emp_tab limit 2000;

          forall i in 1..emp_tab.count
             insert /*+ APPEND */ into emp (empno, ename, deptno, hiredate, comm)
             values (emp_tab(i).empno, emp_tab(i).ename, emp_tab(i).deptno,
                     emp_tab(i).hiredate, emp_tab(i).comm);

          exit when cur_emp%notfound;
       end loop;
       close cur_emp;
    end;
    /

    Thank you

    VSM

    I used a user-defined record type to overcome the error:

    declare
       type emp_rt is record (empno    emp.empno%type,
                              ename    emp.ename%type,
                              deptno   number(2),
                              hiredate emp.hiredate%type,
                              comm     emp.comm%type);

       type row_type is table of emp_rt index by pls_integer;

       emp_tab row_type;

       cursor cur_emp is
          select empno, ename, 20 deptno, hiredate, comm from emp where deptno = 10;
    begin
       open cur_emp;
       loop
          fetch cur_emp bulk collect into emp_tab limit 2;

          forall i in 1..emp_tab.count
             insert /*+ APPEND */ into emp (empno, ename, deptno, hiredate, comm)
             values (emp_tab(i).empno, emp_tab(i).ename, emp_tab(i).deptno,
                     emp_tab(i).hiredate, emp_tab(i).comm);

          exit when cur_emp%notfound;
       end loop;
       close cur_emp;
    end;
    /

    The records are inserted successfully, but I don't know whether this is the right approach for 100 million records?

    Thank you

    VM
