bulk collect into with limit

Hi all
I was reading [http://asktom.oracle.com/pls/apex/f?p=100:11:0:P11_QUESTION_ID:5918938803188] on Tom Kyte's site about BULK COLLECT with LIMIT. The code there uses the %NOTFOUND cursor attribute to exit the fetch loop. What I do in this situation is use the EXISTS collection method instead of the cursor attribute.
create or replace procedure p1 is
type num_list_type is table of number index by pls_integer;
num_list num_list_type;

cursor c1 is select temp from test;

begin
open c1;
loop
  fetch c1 bulk collect into num_list limit 2;
  if num_list.exists(1)=false then
    exit;
  end if;
  for i in num_list.first..num_list.last 
  loop
    dbms_output.put_line(num_list(i));
  END LOOP;
end loop;

end;
Because when I do this:
exit when c1%notfound
the loop exits as soon as the cursor fetches fewer rows than the limit, so the last partial batch is left unprocessed. Does the code work properly otherwise?
Question:
1. Is the EXIT statement correct, or is there another way? (I'm a little skeptical because I'm not using the cursor attribute.)
2. How do I decide on the LIMIT size based on the hardware and Oracle settings we have? Are there any guidelines?
3. What is the best practice when a cursor returns many rows - should I still use BULK COLLECT INTO?

Best regards
Val

Valerie Debonair wrote:
Hi all
I was reading [http://asktom.oracle.com/pls/apex/f?p=100:11:0:P11_QUESTION_ID:5918938803188] on Tom Kyte's site about BULK COLLECT with LIMIT. The code there uses the %NOTFOUND cursor attribute to exit the fetch loop. What I do in this situation is use the EXISTS collection method instead of the cursor attribute.

create or replace procedure p1 is
type num_list_type is table of number index by pls_integer;
num_list num_list_type;

cursor c1 is select temp from test;

begin
open c1;
loop
fetch c1 bulk collect into num_list limit 2;
if num_list.exists(1)=false then
exit;
end if;
for i in num_list.first..num_list.last
loop
dbms_output.put_line(num_list(i));
END LOOP;
end loop;

end;

Because when I do this:

exit when c1%notfound

the loop exits as soon as the cursor fetches fewer rows than the limit, so the last partial batch is left unprocessed. Does the code work properly otherwise?
Question:
1. Is the EXIT statement correct, or is there another way? (I'm a little skeptical because I'm not using the cursor attribute.)
2. How do I decide on the LIMIT size based on the hardware and Oracle settings we have? Are there any guidelines?
3. What is the best practice when a cursor returns many rows - should I still use BULK COLLECT INTO?

Best regards
Val

Hello

1. Yes, the EXIT statement in the above code will work correctly. The other way is to use the %NOTFOUND cursor attribute.
2. There is no precise way to decide the LIMIT size.
3. It depends on the number of records. If you have a lot of records, always use LIMIT; it improves performance.

After reading the link posted by Blu, I am editing this post.

1. Do not use cursor attributes to control the loop when you fetch into collections. Use collection methods such as COUNT or EXISTS instead.
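
For illustration, here is a minimal sketch of that collection-method exit pattern (the table and column names are the ones from the original post; the LIMIT of 100 is just an example):

create or replace procedure p1 is
  type num_list_type is table of number index by pls_integer;
  num_list num_list_type;

  cursor c1 is select temp from test;
begin
  open c1;
  loop
    fetch c1 bulk collect into num_list limit 100;
    exit when num_list.count = 0;   -- collection method instead of c1%notfound
    for i in 1 .. num_list.count loop
      dbms_output.put_line(num_list(i));
    end loop;
  end loop;
  close c1;                         -- don't forget to close the cursor
end;
/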

@Blu,

Thanks much for the link.

Thank you
Suri

Published by: Suri on January 26, 2012 20:38

Tags: Database

Similar Questions

  • Using BULK COLLECT INTO with LIMIT to help avoid running out of TEMP tablespace?

    Hi all

    I want to know whether using BULK COLLECT INTO with LIMIT will help to avoid running out of TEMP tablespace.

    We use Oracle 11g R1.

    I have been assigned the task of creating journaling for all the tables in an APEX query.

    I created procedures that execute some SQL statements to do a CTAS (CREATE TABLE ... AS SELECT) and then create triggers on those tables.

    We have about three tables with more than 26 million records.

    It ran fine until we reached a table with more than 15 million records, where we got an error saying we had run out of TEMP tablespace.

    I googled this topic and found these tips:

    Use NOLOGGING

    Use PARALLEL

    BULK COLLECT INTO with LIMIT

    However, the problems those tips address are usually about running out of memory rather than running out of TEMP tablespace.

    I'm just a junior developer and have not dealt with tables of more than 10 million records at a time like this before.

    Database support is outsourced, so we try to keep contact with the DBAs to a minimum. My manager asked me to find a solution without asking the administrator to extend the TEMP tablespace.

    I wrote a few BULK COLLECT INTO blocks to insert about 300,000 rows at a time in the development environment. It seems to work.

    But the code has only been tested against a table of about 4,000,000 records. I am trying to add more data to the test table, but yet again we ran out of tablespace on DEV (this time it is a DATA tablespace, not TEMP).

    I'll give it a go against the table of 26 million records in Production this weekend. I just want to know if it is worth trying.

    Thanks for reading this.

    Ann

    I really just needed to check that you did not have huge row sizes (like several KB per row); they are not bad at all, which is good!

    A good rule of thumb for maximizing the LIMIT clause is to see how much memory you can afford to consume in the PGA (to reduce the number of fetch and FORALL calls, and therefore the context switches) and adjust the LIMIT to come as close to that amount as possible.

    Use the routines below to check which threshold value is best suited to your system, since it depends on your memory allocation and CPU consumption. It will vary with your PGA limits and with the row length, but this method will get you in the right order of magnitude.

    CREATE OR REPLACE PROCEDURE show_pga_memory (context_in IN VARCHAR2 DEFAULT NULL)
    IS
       l_memory NUMBER;
    BEGIN
       SELECT st.value
         INTO l_memory
         FROM SYS.v_$session se, SYS.v_$sesstat st, SYS.v_$statname nm
        WHERE se.audsid = USERENV ('SESSIONID')
          AND st.statistic# = nm.statistic#
          AND se.sid = st.sid
          AND nm.name = 'session pga memory';

       DBMS_OUTPUT.put_line (CASE
                                WHEN context_in IS NULL
                                THEN NULL
                                ELSE context_in || ' - '
                             END
                             || 'session PGA memory used = '
                             || TO_CHAR (l_memory)
                            );
    END show_pga_memory;
    /

    DECLARE
       PROCEDURE fetch_all_rows (limit_in IN PLS_INTEGER)
       IS
          CURSOR source_cur
          IS
             SELECT *
               FROM YOUR_TABLE;

          TYPE source_aat IS TABLE OF source_cur%ROWTYPE
             INDEX BY PLS_INTEGER;

          l_source   source_aat;
          l_start    PLS_INTEGER;
          l_end      PLS_INTEGER;
       BEGIN
          DBMS_SESSION.free_unused_user_memory;
          show_pga_memory (limit_in || ' - BEFORE');

          l_start := DBMS_UTILITY.get_cpu_time;

          OPEN source_cur;
          LOOP
             FETCH source_cur
                BULK COLLECT INTO l_source LIMIT limit_in;

             EXIT WHEN l_source.COUNT = 0;
          END LOOP;
          CLOSE source_cur;

          l_end := DBMS_UTILITY.get_cpu_time;

          DBMS_OUTPUT.put_line (   'Elapsed CPU time for limit of '
                                || limit_in
                                || ' = '
                                || TO_CHAR (l_end - l_start)
                               );

          show_pga_memory (limit_in || ' - AFTER');
       END fetch_all_rows;
    BEGIN
       fetch_all_rows (20000);
       fetch_all_rows (40000);
       fetch_all_rows (60000);
       fetch_all_rows (80000);
       fetch_all_rows (100000);
       fetch_all_rows (150000);
       fetch_all_rows (250000);
       -- etc.
    END;
    /

  • Bulk collect into the record type

    Sorry for the stupid question - I'm doing something really simple wrong here but can't figure out what. I want to select a few rows from a table in a cursor, then bulk collect them into a record. I'll eventually extend the record to include additional fields that I will populate from function calls, but I can't get this simple test case to run...

    PLS-00497 is the main error.

    Thanks in advance.
    create table test (
    id number primary key,
    val varchar2(20),
    something_else varchar2(20));
    
    insert into test (id, val,something_else) values (1,'test1','else');
    insert into test (id, val,something_else) values (2,'test2','else');
    insert into test (id, val,something_else) values (3,'test3','else');
    insert into test (id, val,something_else) values (4,'test4','else');
    
    commit;
    
    SQL> declare
      2   cursor test_cur is
      3   (select id, val
      4   from test);
      5
      6   type test_rt is record (
      7     id   test.id%type,
      8     val      test.val%type);
      9
     10   test_rec test_rt;
     11
     12  begin
     13    open test_cur;
     14    loop
     15      fetch test_cur bulk collect into test_rec limit 10;
     16       null;
     17     exit when test_rec.count = 0;
     18    end loop;
     19    close test_cur;
     20  end;
     21  /
        fetch test_cur bulk collect into test_rec limit 10;
                                         *
    ERROR at line 15:
    ORA-06550: line 15, column 38:
    PLS-00497: cannot mix between single row and multi-row (BULK) in INTO list
    ORA-06550: line 17, column 21:
    PLS-00302: component 'COUNT' must be declared
    ORA-06550: line 17, column 2:
    PL/SQL: Statement ignored

    You must declare a collection type based on your record type.

    DECLARE
       CURSOR test_cur
       IS
             SELECT
                id,
                val
             FROM
                test
       ;
    type test_rt
    IS
       record
       (
          id test.id%type,
          val test.val%type);
       type test_rec_arr is table of test_rt index by pls_integer;
       test_rec test_rec_arr;
    BEGIN
       OPEN test_cur;
       LOOP
          FETCH
             test_cur bulk collect
          INTO
             test_rec limit 10;
          NULL;
          EXIT
       WHEN test_rec.count = 0;
       END LOOP;
       CLOSE test_cur;
    END;
    /
    
    PL/SQL procedure successfully completed.
    
    Elapsed: 00:00:00.06
    

    Notice that the difference is...

       type test_rec_arr is table of test_rt index by pls_integer;
       test_rec test_rec_arr;
    
  • Bulk collect into a nested table with EXTEND

    Hi all
    I want to get all the columns of the EMP and DEPT tables, so I am using BULK COLLECT INTO with a nested table.

    *) I wrote the function in three different ways. Versions 1 and 2 (DM_NESTTAB_BULKCOLLECT_1 & DM_NESTTAB_BULKCOLLECT_2) do not give the desired result.
    *) They only return the columns of the EMP table. That is, the loop visits both DEPT and EMP, but the result only contains the EMP table's columns.
    *) I think there is some problem with the nested table EXTEND.
    *) I want to know why.
    Can we use BULK COLLECT INTO a nested table together with EXTEND?
    If yes, then please fix the code below (EX: 1 & EX: 2) and explain it to me?


    The code is given below:

    CREATE OR REPLACE TYPE NEST_TAB IS TABLE OF VARCHAR2(1000);

    EX: 1:
    ----
    -- Bulk collect into a nested table with EXTEND
    CREATE OR REPLACE FUNCTION DM_NESTTAB_BULKCOLLECT_1
       RETURN NEST_TAB
    AS
       l_nesttab NEST_TAB := NEST_TAB();
    BEGIN
       FOR tab_rec IN (SELECT table_name
                         FROM user_tables
                        WHERE table_name IN ('EMP', 'DEPT')) LOOP
          l_nesttab.extend;

          SELECT column_name
            BULK COLLECT INTO l_nesttab
            FROM user_tab_columns
           WHERE table_name = tab_rec.table_name
           ORDER BY column_id;
       END LOOP;

       RETURN l_nesttab;
    EXCEPTION
       WHEN OTHERS THEN
          RAISE;
    END DM_NESTTAB_BULKCOLLECT_1;
    /

    SELECT *
      FROM TABLE (DM_NESTTAB_BULKCOLLECT_1);

    OUTPUT:
    -------
    EMPNO
    ENAME
    JOB
    MGR
    HIREDATE
    SAL
    COMM
    DEPTNO

    * Only the EMP table columns are there in the nested table.
    -----------------------------------------------------------------------------------------------------

    EX: 2:
    -----
    -- Bulk collect into a nested table with EXTEND based on a count
    CREATE OR REPLACE FUNCTION DM_NESTTAB_BULKCOLLECT_2
       RETURN NEST_TAB
    AS
       l_nesttab NEST_TAB := NEST_TAB();
       v_col_cnt NUMBER;
    BEGIN
       FOR tab_rec IN (SELECT table_name
                         FROM user_tables
                        WHERE table_name IN ('EMP', 'DEPT')) LOOP
          SELECT MAX (column_id)
            INTO v_col_cnt
            FROM user_tab_columns
           WHERE table_name = tab_rec.table_name;

          l_nesttab.extend(v_col_cnt);

          SELECT column_name
            BULK COLLECT INTO l_nesttab
            FROM user_tab_columns
           WHERE table_name = tab_rec.table_name
           ORDER BY column_id;
       END LOOP;

       RETURN l_nesttab;
    EXCEPTION
       WHEN OTHERS THEN
          RAISE;
    END DM_NESTTAB_BULKCOLLECT_2;
    /

    SELECT *
      FROM TABLE (DM_NESTTAB_BULKCOLLECT_2);

    OUTPUT:
    -------
    EMPNO
    ENAME
    JOB
    MGR
    HIREDATE
    SAL
    COMM
    DEPTNO

    * Only the EMP table columns are there in the nested table.
    -------------------------------------------------------------------------------------------

    EX: 3:
    -----

    -- Bulk collect into a nested table, extending it in a FOR loop.
    CREATE OR REPLACE FUNCTION DM_NESTTAB_BULKCOLLECT_3
       RETURN NEST_TAB
    AS
       l_nesttab NEST_TAB := NEST_TAB();
       TYPE local_nest_tab IS TABLE OF VARCHAR2(1000);
       l_localnesttab LOCAL_NEST_TAB := LOCAL_NEST_TAB();
       x NUMBER := 1;
    BEGIN
       FOR tab_rec IN (SELECT table_name
                         FROM user_tables
                        WHERE table_name IN ('EMP', 'DEPT')) LOOP
          SELECT column_name
            BULK COLLECT INTO l_localnesttab
            FROM user_tab_columns
           WHERE table_name = tab_rec.table_name
           ORDER BY column_id;

          FOR i IN 1 .. l_localnesttab.COUNT LOOP
             l_nesttab.extend;

             l_nesttab(x) := l_localnesttab(i);

             x := x + 1;
          END LOOP;
       END LOOP;

       RETURN l_nesttab;
    EXCEPTION
       WHEN OTHERS THEN
          RAISE;
    END DM_NESTTAB_BULKCOLLECT_3;
    /

    SELECT *
      FROM TABLE (DM_NESTTAB_BULKCOLLECT_3);

    OUTPUT:
    ------
    DEPTNO
    DNAME
    LOC
    EMPNO
    ENAME
    JOB
    MGR
    HIREDATE
    SAL
    COMM
    DEPTNO

    * Now I got the desired result set. Both the DEPT and EMP table columns are in the nested table.




    Thank you
    Ann

    BULK COLLECT cannot append values to an existing collection. It can only overwrite it.
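
    If you do need to accumulate batches, one approach (just a sketch, reusing the NEST_TAB nested table type created above) is to bulk collect each batch into a scratch collection and append it with MULTISET UNION ALL:

    DECLARE
       l_all   NEST_TAB := NEST_TAB();
       l_batch NEST_TAB;
    BEGIN
       FOR tab_rec IN (SELECT table_name
                         FROM user_tables
                        WHERE table_name IN ('EMP', 'DEPT'))
       LOOP
          SELECT column_name
            BULK COLLECT INTO l_batch                  -- each fetch overwrites l_batch
            FROM user_tab_columns
           WHERE table_name = tab_rec.table_name
           ORDER BY column_id;

          l_all := l_all MULTISET UNION ALL l_batch;   -- append to the running result
       END LOOP;

       DBMS_OUTPUT.put_line ('Total columns collected: ' || l_all.COUNT);
    END;
    /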

  • Using a cursor FOR loop vs. BULK COLLECT INTO

    Hi all
    In which cases should we prefer a cursor FOR loop, and in which cases BULK COLLECT? The following contains two blocks with the same query: one uses a cursor FOR loop, the other uses BULK COLLECT. Which of the two performs better? And how do we measure the performance difference between them?

    I use the HR example schema:
    declare
    l_start number;
    BEGIN
    l_start:= DBMS_UTILITY.get_time;
    dbms_lock.sleep(1);
    FOR employee IN (SELECT e.last_name, j.job_title FROM employees e,jobs j 
    where e.job_id=j.job_id and  e.job_id LIKE '%CLERK%' AND e.manager_id > 120 ORDER BY e.last_name)
    LOOP
      DBMS_OUTPUT.PUT_LINE ('Name = ' || employee.last_name || ', Job = ' || employee.job_title);
    END LOOP;
    DBMS_OUTPUT.put_line('total time: ' || to_char(DBMS_UTILITY.get_time - l_start) || ' hsecs');
    END;
    /
     
    declare
    l_start number;
    type rec_type is table of varchar2(20);
    name_rec rec_type;
    job_rec rec_type;
    begin
    l_start:= DBMS_UTILITY.get_time;
    dbms_lock.sleep(1);
    SELECT e.last_name, j.job_title bulk collect into name_rec,job_rec FROM employees e,jobs j 
    where e.job_id=j.job_id and  e.job_id LIKE '%CLERK%' AND e.manager_id > 120 ORDER BY e.last_name;
    for j in name_rec.first..name_rec.last loop
      DBMS_OUTPUT.PUT_LINE ('Name = ' || name_rec(j) || ', Job = ' || job_rec(j));
    END LOOP;
    DBMS_OUTPUT.put_line('total time: ' || to_char(DBMS_UTILITY.get_time - l_start) || ' hsecs');
    end;
    /
    In this code I put a timer in each block, but it is useless since they both run virtually instantaneously...

    Best regards
    Val

    (1) The primary purpose of bulk fetching is to reduce the context switching between the SQL and PL/SQL engines.
    (2) You should always use LIMIT with BULK COLLECT, so that you do not put too much load on the PGA.
    (3) A LIMIT of around 100 rows is generally a good starting point.

    Also, if you really want to compare the performance of the two PL/SQL approaches, try using Tom Kyte's runstats package:

    http://asktom.Oracle.com/pls/Apex/asktom.download_file?p_file=6551378329289980701
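
    To illustrate point (2), here is a rough sketch of the second block rewritten with an explicit cursor and a LIMIT (same HR schema objects as in the question; the LIMIT of 100 is only an example):

    declare
      cursor c is
        select e.last_name, j.job_title
        from   employees e, jobs j
        where  e.job_id = j.job_id
        and    e.job_id like '%CLERK%'
        and    e.manager_id > 120
        order by e.last_name;
      type name_tab is table of employees.last_name%type;
      type job_tab  is table of jobs.job_title%type;
      l_names name_tab;
      l_jobs  job_tab;
    begin
      open c;
      loop
        fetch c bulk collect into l_names, l_jobs limit 100;  -- bounded PGA use
        for i in 1 .. l_names.count loop
          dbms_output.put_line('Name = ' || l_names(i) || ', Job = ' || l_jobs(i));
        end loop;
        exit when l_names.count < 100;   -- last (possibly partial) batch processed
      end loop;
      close c;
    end;
    /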

  • bulk collect into the collection of objects

    create or replace type typ_obj as object
    (l_x number(10),
     l_y varchar2(10),
     l_z varchar2(10)
    );

    create type typ_obj_tt is table of typ_obj;

    -- DESC insert_table
    c_x number(10)
    c_p number(10)

    -- DESC temp2_table
    doJir number(10)
    c_y varchar2(10)
    c_z varchar2(10)

    procedure prc_x (p_obj_out OUT typ_obj_tt)
    is
       cursor c1
       is
       select t1.c_x,
              t2.c_y,
              t2.c_z
         from insert_table t1,
              temp2_table t2
        where
       ....
       ;
    begin
       open c1;
       fetch c1 bulk collect into p_obj_out;
       close c1;
    end;

    This raises an error.

    Can you tell me
    how to get data into this output parameter of object table type?

    Thanks in advance... any help will be much appreciated...

    Please do not spam the forums with duplicate topics - bulk collect into the collection of objects
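
    For what it is worth, the usual way to bulk collect into a nested table of objects is to construct the object in the SELECT list. A minimal sketch using the types and tables posted above (the join condition is only a guess, since the original WHERE clause was elided):

    create or replace procedure prc_x (p_obj_out out typ_obj_tt)
    is
    begin
       select typ_obj(t1.c_x, t2.c_y, t2.c_z)    -- build one object per row
         bulk collect into p_obj_out
         from insert_table t1,
              temp2_table  t2
        where t1.c_x = t2.doJir;                 -- assumed join condition
    end prc_x;
    /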

  • bulk collect into after insertion?

    The INSERT below works great!
        insert into GL_Interface_Control_temp
                                       (Set_Of_Books_ID
                                       ,Group_ID
                                       ,JE_Source_Name
                                       ,Interface_Run_ID)
                               values  (
                                        '2'
                                        ,10
                                        ,'TEST_SOURCE'
                                        ,'2'
                                        )
      returning ty_GLI (Set_Of_Books_ID
                       ,Group_ID
                       ,JE_Source_Name
                       ,Interface_Run_ID
                       ,0
                       ,0
                       ,null)
      bulk collect into l_GLI_Tab;
    BUT when combined with an INSERT... SELECT, as follows:
      insert into GL_Interface_Control_temp
                                       (Set_Of_Books_ID
                                       ,Group_ID
                                       ,JE_Source_Name
                                       ,Interface_Run_ID)
                              (select   gi.Set_Of_Books_ID
                                       ,GL_Interface_Control_S.nextval
                                       ,gi.JE_Source_Name
                                       ,GL_Journal_Import_S.nextval
                                from    GL_Interface_temp gi
                                where   gi.Currency_Conversion_Date <= l_Max_Rate_Date
                                and     gi.Set_Of_Books_ID  = v_Set_Of_Books_ID)
      returning ty_GLI (Set_Of_Books_ID
                       ,Group_ID
                       ,JE_Source_Name
                       ,Interface_Run_ID
                       ,0
                       ,0
                       ,null)
      bulk collect into l_GLI_Tab;
    I get the following error:
      *
    ERROR at line 15:
    ORA-00933: SQL command not properly ended
    Any idea why?
    Can we not use RETURNING ... BULK COLLECT INTO with an INSERT ... SELECT?

    P;

    I'm sorry to tell you that the RETURNING clause can only be used with the VALUES clause.

    http://download.Oracle.com/docs/CD/E11882_01/server.112/e10592/statements_9014.htm#i2121671
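
    A possible workaround (a sketch only, untested, and it assumes the variables l_Max_Rate_Date, v_Set_Of_Books_ID and l_GLI_Tab are declared in the enclosing block as in the original post): bulk collect the SELECT first, then do a FORALL insert with a VALUES clause, which does accept RETURNING:

      declare
        type num_tab is table of number index by pls_integer;
        type vc_tab  is table of varchar2(30) index by pls_integer;  -- size is an assumption
        l_sob num_tab;
        l_grp num_tab;
        l_src vc_tab;
        l_run num_tab;
      begin
        select gi.Set_Of_Books_ID
              ,GL_Interface_Control_S.nextval
              ,gi.JE_Source_Name
              ,GL_Journal_Import_S.nextval
          bulk collect into l_sob, l_grp, l_src, l_run
          from GL_Interface_temp gi
         where gi.Currency_Conversion_Date <= l_Max_Rate_Date
           and gi.Set_Of_Books_ID = v_Set_Of_Books_ID;

        forall i in 1 .. l_sob.count
          insert into GL_Interface_Control_temp
                 (Set_Of_Books_ID, Group_ID, JE_Source_Name, Interface_Run_ID)
          values (l_sob(i), l_grp(i), l_src(i), l_run(i))
          returning ty_GLI (Set_Of_Books_ID
                           ,Group_ID
                           ,JE_Source_Name
                           ,Interface_Run_ID
                           ,0, 0, null)
          bulk collect into l_GLI_Tab;
      end;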

    Bye, Alessandro

  • Insert ... select vs. bulk collect on Exadata

    Hi all

    We work on Oracle Exadata and I am new to it. We need to insert some 7.5 million records into a table. My manager, who has worked with Exadata, asked me to use INSERT ... SELECT to load the 7.5 million records from one table to the other. He asked me to forget about bulk collect concepts. I have read that Exadata prefers set-based techniques over row-by-row ones. Isn't bulk collect a set-based technique? Which is the better technique for loading records from one table to another on Oracle Exadata, INSERT ... SELECT or bulk collect? Please advise me on this issue.

    Tom's mantra applies to Exadata too; see his follow-up here:

    Ask Tom "DML - insert/select or bulk collect..."
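
    In other words, the set-based approach boils down to one statement. A minimal sketch (source_table and target_table are placeholder names; the APPEND hint for a direct-path load is optional and assumes a direct-path insert is acceptable for the target):

    insert /*+ append */ into target_table
    select * from source_table;

    commit;   -- required before the table can be queried after a direct-path insert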

  • Bulk collect into the statement

    Hi all

    I'm trying to copy data using a database link.

    I'm using Oracle 11g on Windows.

    I checked the data in the source table and it exists. When I run the block below it completes successfully, but no data appears in the target database.

    SET SERVEROUTPUT ON

    DECLARE

       TYPE t_bulk_collect_test_1 IS TABLE OF NT_PROP%ROWTYPE;

       l_tab1 t_bulk_collect_test_1;

       CURSOR c_data1 IS
          SELECT *
            FROM NT_PROP@dp_copy;

    BEGIN

       OPEN c_data1;

       LOOP

          FETCH c_data1
             BULK COLLECT INTO l_tab1 LIMIT 10000;

          commit;

          EXIT WHEN l_tab1.count = 0;

       END LOOP;

       CLOSE c_data1;

    END;

    /

    Could someone please let me know what the error is in this code?

    Thanks in advance

    The bulk operation will not improve performance here. Your code is a good example of how to write code that is a performance nightmare. I'm sorry to say this, but it's the truth.

    Static SQL is the fastest way to do it. As you are transferring the data over a DB link there will be some overhead, but the PL/SQL block is not the solution for that. A simple INSERT INTO ... SELECT ... is the best way to do what you want.
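
    In other words, something along these lines (assuming the local target table is also called NT_PROP, as the %ROWTYPE declaration suggests):

    insert into nt_prop
    select * from nt_prop@dp_copy;

    commit;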

  • Getting bulk collect to work with 9.2 and snapshots

    Hello
    I take snapshots of v$sysstat with a 1-second delay to count things like this:
    select name,value from v$sysstat
    where
    name in ('execute count', 'parse count (hard)', 'parse count (total)',
                              'physical read total IO requests', 'physical read total bytes',
                              'physical write total IO requests', 'physical write total bytes',
                              'redo size', 'redo writes', 'session cursor cache hits',
                              'session logical reads', 'user calls', 'user commits') ;
    dbms_lock.sleep(1);
    
    select name,value from v$sysstat
    where
    name in ('execute count', 'parse count (hard)', 'parse count (total)',
                              'physical read total IO requests', 'physical read total bytes',
                              'physical write total IO requests', 'physical write total bytes',
                              'redo size', 'redo writes', 'session cursor cache hits',
                              'session logical reads', 'user calls', 'user commits') ;
    But I'm really interested in the deltas, so I think bulk collect is perhaps the obvious solution.
    I could work around it with a pivot and a self-join to get only one row per query and then just do the math.
    I'm wondering how to build the collection to support my application?
    The code should work on 9.2 and above.
    Regards
    GregG

    GregG wrote:
    OK, but how do you know which session should be monitored?
    Do you use logon triggers to start the sampling and logoff triggers to end it?
    That looks like overhead to me.

    As Padders said, it should be activated only when necessary. I've never used logon/logoff triggers for this, but it can be done via session connect/disconnect triggers if necessary.

    Keep in mind that SQL statistics are not always on. Often you need to enable them manually, depending on the database defaults, the version, and your requirements. For example:

    alter session set statistics_level = all
    alter session set timed_statistics = true
    alter session set timed_os_statistics = 5
    

    Alternatively, you can manually set the trace level.

    Point being is that you have a package with the following basic interface:

    create or replace package FooStats as
      --// instruct stats/tracing to be enabled and base snapshot
      --// of current stats to be taken (stored in GTTs)
      procedure StartSnapshot( processName varchar2, .. );
    
      --// instructs the GTT base data to be updated with the delta between
      --// the base data and current stats data
      procedure EndSnapshot( .. );
    
    end;
    

    ... to turn stats recording on and off, quite easily. It can prepare the database session for stats tracking, set trace levels, issue ALTER SESSION commands and so on.

    It also allows, for example, a normal application to define a global boolean switch which can be turned on to tell an application module to instrument itself by recording its processing stats, using your custom statistics package.

    However you choose to do it, the basics of modularisation and re-use remain key.
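
    As for building the delta collection itself, here is a minimal sketch (not production code) that takes two snapshots into an associative array keyed by statistic name and prints the deltas; the statistic list is shortened for brevity, and INDEX BY VARCHAR2 needs 9.2 or later:

    declare
      type stat_tab is table of number index by varchar2(64);
      snap1 stat_tab;
      snap2 stat_tab;
      nm    varchar2(64);

      procedure take_snapshot (p out stat_tab) is
      begin
        for r in (select name, value
                    from v$sysstat
                   where name in ('execute count', 'user calls', 'user commits',
                                  'redo size', 'session logical reads'))
        loop
          p(r.name) := r.value;   -- key the array by statistic name
        end loop;
      end;
    begin
      take_snapshot(snap1);
      dbms_lock.sleep(1);
      take_snapshot(snap2);

      nm := snap2.first;
      while nm is not null loop
        dbms_output.put_line(nm || ' delta = ' || to_char(snap2(nm) - snap1(nm)));
        nm := snap2.next(nm);
      end loop;
    end;
    /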

  • How to use the bulk collect into clause

    Hi all
    I need something like this: I want to change a query by passing in table names, on Oracle Database 10g Release 2.

    For example, first I pass the table name scott.emp: select * from scott.emp;

    Then I want to pass the table name scott.dept: select * from scott.dept;
    using select * in the select list.

    How can I run this?

    Please give me a solution.

    Please answer...

    EXECUTE IMMEDIATE is not really an option here, because you have a variable select list.

    You can use DBMS_SQL or a REF CURSOR. A Ref Cursor example is given below:

    var oput_cur refcursor
    var tabname varchar2(30)
    
    exec :tabname := 'dual';
    
    begin
      open :oput_cur for 'select * from ' || :tabname;
    end;
    /
    
    print oput_cur
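
    For completeness, a rough sketch of the DBMS_SQL route mentioned above. It copes with an unknown select list by describing the columns at run time; every value is fetched as VARCHAR2 here just to print it, and 'dual' is only an example table name:

    declare
      c        integer := dbms_sql.open_cursor;
      l_query  varchar2(200) := 'select * from dual';   -- any table name could go here
      col_cnt  integer;
      cols     dbms_sql.desc_tab;
      val      varchar2(4000);
      line     varchar2(32767);
      ignore   integer;
    begin
      dbms_sql.parse(c, l_query, dbms_sql.native);
      dbms_sql.describe_columns(c, col_cnt, cols);       -- discover the select list

      for i in 1 .. col_cnt loop
        dbms_sql.define_column(c, i, val, 4000);         -- fetch everything as VARCHAR2
      end loop;

      ignore := dbms_sql.execute(c);

      while dbms_sql.fetch_rows(c) > 0 loop
        line := null;
        for i in 1 .. col_cnt loop
          dbms_sql.column_value(c, i, val);
          line := line || cols(i).col_name || '=' || val || ' ';
        end loop;
        dbms_output.put_line(line);
      end loop;

      dbms_sql.close_cursor(c);
    end;
    /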
    
  • Doubt on bulk collect with LIMIT

    Hello

    I have a doubt about bulk collect: when should I commit?

    I have an example from PSOUG:
    http://psoug.org/reference/array_processing.html
    CREATE TABLE servers2 AS
    SELECT *
    FROM servers
    WHERE 1=2;
    
    DECLARE
     CURSOR s_cur IS
     SELECT *
     FROM servers;
    
     TYPE fetch_array IS TABLE OF s_cur%ROWTYPE;
     s_array fetch_array;
    BEGIN
      OPEN s_cur;
      LOOP
        FETCH s_cur BULK COLLECT INTO s_array LIMIT 1000;
    
        FORALL i IN 1..s_array.COUNT
        INSERT INTO servers2 VALUES s_array(i);
    
        EXIT WHEN s_cur%NOTFOUND;
      END LOOP;
      CLOSE s_cur;
      COMMIT;
    END;
    If my servers table had 3,000,000 rows, when should we commit? After all the records are inserted?
    Could that blow out the redo logs?
    Using 9.2.0.8

    muttleychess wrote:
    If my servers table had 3,000,000 rows, when should we commit?

    The commit point has nothing to do with how many rows you process. It is purely business-driven. Your code implements a business transaction, right? So if you commit before that transaction (from the business point of view) is complete, other sessions will already see changes that are (from the business point of view) incomplete. In addition, what happens if the rest of the transaction (from the business point of view) fails?

    SY.

  • Bulk collect with sequence Nextval

    Hello


    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64 bit


    I have a doubt about bulk collect with a sequence NEXTVAL. I need to update a table with values from a sequence's NEXTVAL.

    Where should I place the SELECT below in the procedure - before or after the loop?


    SELECT prop_id_s.nextval INTO v_prop_id FROM dual;
    CREATE OR REPLACE PROCEDURE  (state IN varchar2)
    AS
       
       CURSOR get_all
       IS
          SELECT                              /*+ parallel (A, 8) */
                A .ROWID
                from Loads A WHERE A.Prop_id IS NULL;
    
       TYPE b_ROWID
       IS
          TABLE OF ROWID
             INDEX BY BINARY_INTEGER;
    
       
       lns_rowid          b_ROWID;
    BEGIN
    
    
       OPEN Get_all;
    
       LOOP
          FETCH get_all BULK COLLECT INTO   lns_rowid LIMIT 10000;
    
          FORALL I IN 1 .. lns_rowid.COUNT
             UPDATE   loads a
                SET   a.prop_id= v_prop_id (I)
              WHERE   A.ROWID = lns_rowid (I) AND a.prop_id IS NULL;
    
          
          COMMIT;
          EXIT WHEN get_all%NOTFOUND;
       END LOOP;
    
       
    
       CLOSE Get_all;
    END;
    /
    Published by: 960736 on January 23, 2013 12:51

    Hello

    It depends on what results you want. Should all updated rows take the same value, or should they all get unique values?

    Either way, you don't need the cursor and the loop. Just a simple UPDATE statement.

    If each row requires a unique value from the sequence, then:

    UPDATE  loads
    SET     prop_id     = prod_id_s.NEXTVAL
    WHERE     prop_id     IS NULL
    ;
    

    If all the rows with a NULL need the same value, then:

    SELECT     prod_id_s.nextval
    INTO     v_prop_id
    FROM     dual;
    
    UPDATE  loads
    SET     prop_id     = v_prop_id
    WHERE     prop_id     IS NULL
    ;
    

    Don't forget to declare v_prop_id as a NUMBER.

    I hope that answers your question.
    If not, post a small example of data (CREATE TABLE and INSERT statements, only the relevant columns) for all the tables and sequences involved, and also post the results you want from that data.
    If you are asking about a DML statement, such as UPDATE, the sample data will be the contents of the table(s) before the DML, and the results will be the state of the changed table(s) when it's all over.
    Explain, using specific examples, how you get those results from that data.
    Always say which version of Oracle you are using (for example, 11.2.0.2.0).
    See the forum FAQ {message identifier: = 9360002}

  • Bulk Collect and limit the rows

    Hello Oracles,

    I am seeing some strange (at least to me) behavior with BULK COLLECT and LIMIT rows.
    For test purposes, I've written a procedure that uses both an explicit AND an implicit CURSOR.
    When I use the explicit CURSOR and the LOOP, I use BULK COLLECT with a LIMIT on the rows.
    With my SELECT INTO I use a ROWNUM limit; I know for a fact ROWNUM has worked fine since the last millennium.
    When I look at the number of rows returned when I set the LIMIT, I get weird fetch counts...

    I fetch into an INDEX BY table based on a RECORD type.
    Here are a few results with different LIMIT values for a small set of PRIMARY KEYS.
    The figures below are the value of my_table.COUNT

    Any idea would be appreciated.

    THX

    .
    PK 470553 EXPLICIT CURSOR Actual Count = 17
    . LOOP BULK COLLECT LIMIT 78 retrieves : 17
    .
    PK 100991 EXPLICIT CURSOR Actual Count = 38
    . LOOP BULK COLLECT LIMIT 78 retrieves : 38
    .
    PK 100981 EXPLICIT CURSOR Actual Count = 183
    . LOOP BULK COLLECT LIMIT 78 retrieves : 27
    .
    PK 101001 EXPLICIT CURSOR Actual Count = 193
    . LOOP BULK COLLECT LIMIT 78 retrieves : 37
    .
    PK 101033 EXPLICIT CURSOR Actual Count = 593
    . LOOP BULK COLLECT LIMIT 78 retrieves : 47

    *************************************************
    PK 470553 EXPLICIT CURSOR Actual Count = 17
    . LOOP BULK COLLECT LIMIT 100 retrieves : 17
    .
    PK 100991 EXPLICIT CURSOR Actual Count = 38
    . LOOP BULK COLLECT LIMIT 100 retrieves : 38
    .
    PK 100981 EXPLICIT CURSOR Actual Count = 183
    . LOOP BULK COLLECT LIMIT 100 retrieves : 83
    .
    PK 101001 EXPLICIT CURSOR Actual Count = 193
    . LOOP BULK COLLECT LIMIT 100 retrieves : 93
    .
    PK 101033 EXPLICIT CURSOR Actual Count = 593
    . LOOP BULK COLLECT LIMIT 100 retrieves : 93

    *************************************************

    PK 470553 EXPLICIT CURSOR Actual Count = 17
    . LOOP BULK COLLECT LIMIT 140 retrieves : 17
    .
    PK 100991 EXPLICIT CURSOR Actual Count = 38
    . LOOP BULK COLLECT LIMIT 140 retrieves : 38
    .
    PK 100981 EXPLICIT CURSOR Actual Count = 183
    . LOOP BULK COLLECT LIMIT 140 retrieves : 43
    .
    PK 101001 EXPLICIT CURSOR Actual Count = 193
    . LOOP BULK COLLECT LIMIT 140 retrieves : 53
    .
    PK 101033 EXPLICIT CURSOR Actual Count = 593
    . LOOP BULK COLLECT LIMIT 140 retrieves : 33

    *************************************************
    PK 470553 EXPLICIT CURSOR Actual Count = 17
    . LOOP BULK COLLECT LIMIT 183 retrieves : 17
    .
    PK 100991 EXPLICIT CURSOR Actual Count = 38
    . LOOP BULK COLLECT LIMIT 183 retrieves : 38
    .
    PK 100981 EXPLICIT CURSOR Actual Count = 183
    . LOOP BULK COLLECT LIMIT 183 retrieves : 0
    .
    PK 101001 EXPLICIT CURSOR Actual Count = 193
    . LOOP BULK COLLECT LIMIT 183 retrieves : 10
    .
    PK 101033 EXPLICIT CURSOR Actual Count = 593
    . LOOP BULK COLLECT LIMIT 183 retrieves : 44

    *************************************************

    PK 470553 EXPLICIT CURSOR Actual Count = 17
    . LOOP BULK COLLECT LIMIT 200 retrieves : 17
    .
    PK 100991 EXPLICIT CURSOR Actual Count = 38
    . LOOP BULK COLLECT LIMIT 200 retrieves : 38
    .
    PK 100981 EXPLICIT CURSOR Actual Count = 183
    . LOOP BULK COLLECT LIMIT 200 retrieves : 183
    .
    PK 101001 EXPLICIT CURSOR Actual Count = 193
    . LOOP BULK COLLECT LIMIT 200 retrieves : 193
    .
    PK 101033 EXPLICIT CURSOR Actual Count = 593
    . LOOP BULK COLLECT LIMIT 200 retrieves : 193

    *************************************************

    PK 470553 EXPLICIT CURSOR Actual Count = 17
    . LOOP BULK COLLECT LIMIT 600 retrieves : 17
    .
    PK 100991 EXPLICIT CURSOR Actual Count = 38
    . LOOP BULK COLLECT LIMIT 600 retrieves : 38
    .
    PK 100981 EXPLICIT CURSOR Actual Count = 183
    . LOOP BULK COLLECT LIMIT 600 retrieves : 183
    .
    PK 101001 EXPLICIT CURSOR Actual Count = 193
    . LOOP BULK COLLECT LIMIT 600 retrieves : 193
    .
    PK 101033 EXPLICIT CURSOR Actual Count = 593
    . LOOP BULK COLLECT LIMIT 600 retrieves : 593

    *************************************************
    PK 470553 EXPLICIT CURSOR Actual Count = 17
    . LOOP BULK COLLECT LIMIT 593 retrieves : 17
    .
    PK 100991 EXPLICIT CURSOR Actual Count = 38
    . LOOP BULK COLLECT LIMIT 593 retrieves : 38
    .
    PK 100981 EXPLICIT CURSOR Actual Count = 183
    . LOOP BULK COLLECT LIMIT 593 retrieves : 183
    .
    PK 101001 EXPLICIT CURSOR Actual Count = 193
    . LOOP BULK COLLECT LIMIT 593 retrieves : 193
    .
    PK 101033 EXPLICIT CURSOR Actual Count = 593
    . LOOP BULK COLLECT LIMIT 593 retrieves : 0

    PL/SQL procedure successfully completed.

    SQL> spool off

    I love a mystery, so I figured out what your code might look like:

    SQL> create table t
      2  as
      3  select case n1
      4         when 1 then 470553
      5         when 2 then 100991
      6         when 3 then 100981
      7         when 4 then 101001
      8         when 5 then 101033
      9         end pk
     10    from (select level n1 from dual connect by level <= 5)
     11       , (select level n2 from dual connect by level <= 593)
     12   where (  (n1 = 1 and n2 <= 17)
     13         or (n1 = 2 and n2 <= 38)
     14         or (n1 = 3 and n2 <= 183)
     15         or (n1 = 4 and n2 <= 193)
     16         or (n1 = 5 and n2 <= 593)
     17         )
     18  /
    
    Table created.
    
    SQL> declare
      2    type ta is table of number;
      3    a_limitsizes ta := ta(78,100,140,183,200,600,593);
      4    a_pks ta := ta(470553,100991,100981,101001,101033);
      5    a ta;
      6    l_actualcount number;
      7    cursor c(b number) is select pk from t where pk = b;
      8  begin
      9    for i in a_limitsizes.first .. a_limitsizes.last
     10    loop
     11      for j in a_pks.first .. a_pks.last
     12      loop
     13        l_actualcount := 0;
     14        open c(a_pks(j));
     15        loop
     16          fetch c bulk collect into a limit a_limitsizes(i);
     17          l_actualcount := l_actualcount + a.count;
     18          exit when a.count != a_limitsizes(i);
     19        end loop;
     20        close c;
     21        dbms_output.put_line('PK ' || a_pks(j) || ' EXPLICIT CURSOR Actual Count = ' || l_actualcount);
     22        dbms_output.put_line('. LOOP BULK COLLECT LIMIT ' || a_limitsizes(i) || ' retrieves : ' || a.count);
     23        dbms_output.new_line;
     24      end loop;
     25      dbms_output.put_line('*************************************************');
     26      dbms_output.new_line;
     27    end loop;
     28  end;
     29  /
    PK 470553 EXPLICIT CURSOR Actual Count = 17
    . LOOP BULK COLLECT LIMIT 78 retrieves : 17
    
    PK 100991 EXPLICIT CURSOR Actual Count = 38
    . LOOP BULK COLLECT LIMIT 78 retrieves : 38
    
    PK 100981 EXPLICIT CURSOR Actual Count = 183
    . LOOP BULK COLLECT LIMIT 78 retrieves : 27
    
    PK 101001 EXPLICIT CURSOR Actual Count = 193
    . LOOP BULK COLLECT LIMIT 78 retrieves : 37
    
    PK 101033 EXPLICIT CURSOR Actual Count = 593
    . LOOP BULK COLLECT LIMIT 78 retrieves : 47
    
    *************************************************
    
    PK 470553 EXPLICIT CURSOR Actual Count = 17
    . LOOP BULK COLLECT LIMIT 100 retrieves : 17
    
    PK 100991 EXPLICIT CURSOR Actual Count = 38
    . LOOP BULK COLLECT LIMIT 100 retrieves : 38
    
    PK 100981 EXPLICIT CURSOR Actual Count = 183
    . LOOP BULK COLLECT LIMIT 100 retrieves : 83
    
    PK 101001 EXPLICIT CURSOR Actual Count = 193
    . LOOP BULK COLLECT LIMIT 100 retrieves : 93
    
    PK 101033 EXPLICIT CURSOR Actual Count = 593
    . LOOP BULK COLLECT LIMIT 100 retrieves : 93
    
    *************************************************
    
    PK 470553 EXPLICIT CURSOR Actual Count = 17
    . LOOP BULK COLLECT LIMIT 140 retrieves : 17
    
    PK 100991 EXPLICIT CURSOR Actual Count = 38
    . LOOP BULK COLLECT LIMIT 140 retrieves : 38
    
    PK 100981 EXPLICIT CURSOR Actual Count = 183
    . LOOP BULK COLLECT LIMIT 140 retrieves : 43
    
    PK 101001 EXPLICIT CURSOR Actual Count = 193
    . LOOP BULK COLLECT LIMIT 140 retrieves : 53
    
    PK 101033 EXPLICIT CURSOR Actual Count = 593
    . LOOP BULK COLLECT LIMIT 140 retrieves : 33
    
    *************************************************
    
    PK 470553 EXPLICIT CURSOR Actual Count = 17
    . LOOP BULK COLLECT LIMIT 183 retrieves : 17
    
    PK 100991 EXPLICIT CURSOR Actual Count = 38
    . LOOP BULK COLLECT LIMIT 183 retrieves : 38
    
    PK 100981 EXPLICIT CURSOR Actual Count = 183
    . LOOP BULK COLLECT LIMIT 183 retrieves : 0
    
    PK 101001 EXPLICIT CURSOR Actual Count = 193
    . LOOP BULK COLLECT LIMIT 183 retrieves : 10
    
    PK 101033 EXPLICIT CURSOR Actual Count = 593
    . LOOP BULK COLLECT LIMIT 183 retrieves : 44
    
    *************************************************
    
    PK 470553 EXPLICIT CURSOR Actual Count = 17
    . LOOP BULK COLLECT LIMIT 200 retrieves : 17
    
    PK 100991 EXPLICIT CURSOR Actual Count = 38
    . LOOP BULK COLLECT LIMIT 200 retrieves : 38
    
    PK 100981 EXPLICIT CURSOR Actual Count = 183
    . LOOP BULK COLLECT LIMIT 200 retrieves : 183
    
    PK 101001 EXPLICIT CURSOR Actual Count = 193
    . LOOP BULK COLLECT LIMIT 200 retrieves : 193
    
    PK 101033 EXPLICIT CURSOR Actual Count = 593
    . LOOP BULK COLLECT LIMIT 200 retrieves : 193
    
    *************************************************
    
    PK 470553 EXPLICIT CURSOR Actual Count = 17
    . LOOP BULK COLLECT LIMIT 600 retrieves : 17
    
    PK 100991 EXPLICIT CURSOR Actual Count = 38
    . LOOP BULK COLLECT LIMIT 600 retrieves : 38
    
    PK 100981 EXPLICIT CURSOR Actual Count = 183
    . LOOP BULK COLLECT LIMIT 600 retrieves : 183
    
    PK 101001 EXPLICIT CURSOR Actual Count = 193
    . LOOP BULK COLLECT LIMIT 600 retrieves : 193
    
    PK 101033 EXPLICIT CURSOR Actual Count = 593
    . LOOP BULK COLLECT LIMIT 600 retrieves : 593
    
    *************************************************
    
    PK 470553 EXPLICIT CURSOR Actual Count = 17
    . LOOP BULK COLLECT LIMIT 593 retrieves : 17
    
    PK 100991 EXPLICIT CURSOR Actual Count = 38
    . LOOP BULK COLLECT LIMIT 593 retrieves : 38
    
    PK 100981 EXPLICIT CURSOR Actual Count = 183
    . LOOP BULK COLLECT LIMIT 593 retrieves : 183
    
    PK 101001 EXPLICIT CURSOR Actual Count = 193
    . LOOP BULK COLLECT LIMIT 593 retrieves : 193
    
    PK 101033 EXPLICIT CURSOR Actual Count = 593
    . LOOP BULK COLLECT LIMIT 593 retrieves : 0
    
    *************************************************
    
    PL/SQL procedure successfully completed.
    

    Randolf's observation was right: you are simply looking at the last fetch of a series of fetches, which is the modulo / remainder.

    Example: if your cursor retrieves a total of 183 rows with a fetch size of 100, then your loop iterates twice. The first fetch gets 100 rows, the second 83. You print only the last fetch and not the sum of all the fetches.

    Kind regards
    Rob.

  • PLS-00201: identifier 'i' must be declared when using BULK COLLECT with FORALL to insert data in 2 tables?

    Hi.

    Declare
       cursor c_1
       is
        select col1,col2,col3,col4
        from table1
    
    
       type t_type is table of c_1%rowtype index by binary_integer;
       v_data t_type;
    BEGIN
       OPEN c_1;
       LOOP
          FETCH c_1 BULK COLLECT INTO v_data LIMIT 200;
          EXIT WHEN v_data.COUNT = 0;
          FORALL i IN v_data.FIRST .. v_data.LAST
             INSERT INTO xxc_table
               (col1,
                col3,
                col4
               )
                SELECT v_data (i).col1,
                       v_data (i).col3,
                       v_data (i).col4
                  FROM DUAL
                 WHERE NOT EXISTS
                              (SELECT 1
                                 FROM xxc_table a
                                WHERE col1=col1
                                      .....
                              );
                         --commit;
             INSERT INTO xxc_table1
               (col1,
               col2,
              col3,
              col4
               )
                SELECT v_data (i).col1,
                       v_data (i).col2,
                       v_data (i).col3,
                       'Y'
                  FROM DUAL
                 WHERE NOT EXISTS
                              (SELECT 1
                                 FROM xxc_table1 a
                                WHERE col1=col1
          .....
         );
    
    
           --exit when c_1%notfound;
       END LOOP;
       CLOSE c_1;
       commit;
    END;
    
    
    
    
    
    
    
    

    I get PLS-00201 at 40/28: identifier 'I' must be declared. What is the problem with the above code? Please help me; I have lakhs of rows of data.

    Thank you

    Post edited by: Rajesh123 I changed IDX

    Post edited by: Rajesh123 changed t_type c_1 in Fetch

    But using a multi-table INSERT to insert into the two tables at once in the same statement would do the job without any PL/SQL bulk collection, and would avoid querying the source twice too.

    For example, as a single INSERT ALL...

    SQL> create table table1 as
      2  select 1 as col1, 1 as col2, 1 as col3, 1 as col4 from dual union all
      3  select 2,2,2,2 from dual union all
      4  select 3,3,3,3 from dual union all
      5  select 4,4,4,4 from dual union all
      6  select 5,5,5,5 from dual union all
      7  select 6,6,6,6 from dual union all
      8  select 7,7,7,7 from dual union all
      9  select 8,8,8,8 from dual union all
     10  select 9,9,9,9 from dual union all
     11  select 10,10,10,10 from dual
     12  /

    Table created.

    SQL> create table xxc_table as
      2  select 1 as col1, 2 as col3, 3 as col4 from dual union all
      3  select 3, 4, 5 from dual union all
      4  select 5, 6, 7 from dual
      5  /

    Table created.

    SQL> create table xxc_table1 as
      2  select 3 as col1, 4 as col2, 5 as col3, 'N' as col4 from dual union all
      3  select 6, 7, 8, 'N' from dual
      4  /

    Table created.

    SQL> insert all
      2    when xt_insert is null then
      3      into xxc_table (col1, col3, col4)
      4      values (col1, col3, col4)
      5    when xt1_insert is null then
      6      into xxc_table1 (col1, col2, col3, col4)
      7      values (col1, col2, col3, 'Y')
      8  select t1.col1, t1.col2, t1.col3, t1.col4
      9       , xt.col1 as xt_insert
     10       , xt1.col1 as xt1_insert
     11  from table1 t1
     12  left outer join xxc_table xt on (t1.col1 = xt.col1)
     13  left outer join xxc_table1 xt1 on (t1.col1 = xt1.col1)
     14  /

    15 rows created.

    SQL> select * from xxc_table order by 1;

          COL1       COL3       COL4
    ---------- ---------- ----------
             1          2          3
             2          2          2
             3          4          5
             4          4          4
             5          6          7
             6          6          6
             7          7          7
             8          8          8
             9          9          9
            10         10         10

    10 rows selected.

    SQL> select * from xxc_table1 order by 1;

          COL1       COL2       COL3 C
    ---------- ---------- ---------- -
             1          1          1 Y
             2          2          2 Y
             3          4          5 N
             4          4          4 Y
             5          5          5 Y
             6          7          8 N
             7          7          7 Y
             8          8          8 Y
             9          9          9 Y
            10         10         10 Y

    10 rows selected.

    SQL>
