UNION operator with BULK COLLECT into a collection type

Hi all

I created a collection type as given below:
create or replace type coltest is table of number;

Here are 3 PL/SQL blocks that populate variables of the above collection type:


BLOCK 1:

DECLARE
col1 coltest := coltest (1, 2, 3, 4, 5, 11);
col2 coltest := coltest (6, 7, 8, 9, 10);
col3 coltest := coltest ();

BEGIN

SELECT * BULK COLLECT
  INTO col1
  FROM (SELECT * FROM TABLE (CAST (col1 AS coltest))
        UNION ALL
        SELECT * FROM TABLE (CAST (col2 AS coltest)));

dbms_output.put_line ('col1');
dbms_output.put_line ('col1.count: ' || col1.COUNT);

FOR i IN 1 .. col1.COUNT
LOOP
   dbms_output.put_line (col1 (i));
END LOOP;

END;

OUTPUT:
col1
col1.count: 5
6
7
8
9
10



BLOCK 2:

DECLARE
col1 coltest := coltest (1, 2, 3, 4, 5, 11);
col2 coltest := coltest (6, 7, 8, 9, 10);
col3 coltest := coltest ();

BEGIN

SELECT * BULK COLLECT
  INTO col2
  FROM (SELECT * FROM TABLE (CAST (col1 AS coltest))
        UNION ALL
        SELECT * FROM TABLE (CAST (col2 AS coltest)));

dbms_output.put_line ('col2');
dbms_output.put_line ('col2.count: ' || col2.COUNT);

FOR i IN 1 .. col2.COUNT
LOOP
   dbms_output.put_line (col2 (i));
END LOOP;
END;

OUTPUT:
col2
col2.count: 6
1
2
3
4
5
11

BLOCK 3:

DECLARE
col1 coltest := coltest (1, 2, 3, 4, 5, 11);
col2 coltest := coltest (6, 7, 8, 9, 10);
col3 coltest := coltest ();

BEGIN

SELECT * BULK COLLECT
  INTO col3
  FROM (SELECT * FROM TABLE (CAST (col1 AS coltest))
        UNION ALL
        SELECT * FROM TABLE (CAST (col2 AS coltest)));

dbms_output.put_line ('col3');
dbms_output.put_line ('col3.count: ' || col3.COUNT);

FOR i IN 1 .. col3.COUNT
LOOP
   dbms_output.put_line (col3 (i));
END LOOP;
END;


OUTPUT:

col3
col3.count: 11
1
2
3
4
5
11
6
7
8
9
10

Can someone please explain the output of BLOCK 1 and BLOCK 2? Why doesn't the bulk collect into col1 and col2 return 11 as the COUNT?

If I remember correctly, the INTO part of the query first initializes the collection into which it will collect the data, so if you bulk collect into a collection that you are also querying, you end up deleting the data from that collection before it is even queried.

Put simply, it is not wise to try to collect data into a collection that you are querying.
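
For reference, the safe pattern is to bulk collect into a collection that is not itself referenced in the query, which is what BLOCK 3 does with col3. A minimal sketch (the result variable is new, not from the original post):

DECLARE
   col1   coltest := coltest (1, 2, 3, 4, 5, 11);
   col2   coltest := coltest (6, 7, 8, 9, 10);
   result coltest := coltest ();
BEGIN
   -- result is not queried, so nothing is wiped out before the fetch
   SELECT * BULK COLLECT INTO result
     FROM (SELECT * FROM TABLE (CAST (col1 AS coltest))
           UNION ALL
           SELECT * FROM TABLE (CAST (col2 AS coltest)));

   dbms_output.put_line ('result.count: ' || result.COUNT);  -- 11, as in BLOCK 3
END;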

Tags: Database

Similar Questions

  • Problem with BULK collect and variable of Table type

    Hi all
    I defined a record type, then an index-by table of that record type, and bulk collected the data as shown in the code below. All this was done in an anonymous block.

    Then when I tried to define the record as an object type instead of the record type above, I got the below error:

    ORA-06550: line 34, column 6:
    PL/SQL: ORA-00947: not enough values
    ORA-06550: line 31, column 4:
    PL/SQL: SQL statement ignored

    Could you help me get the result of the first scenario with record type defined as an object?
    /* Formatted on 2009/08/03 17:01 (Formatter Plus v4.8.8) */
    DECLARE
       TYPE obj_attrib IS TABLE OF num_char_object_1
          INDEX BY PLS_INTEGER;
    
       obj_var   obj_attrib;
    
       TYPE num_char_record IS RECORD (
          char_attrib   VARCHAR2 (100),
          num_attrib    NUMBER
       );
    
       TYPE rec_attrib IS TABLE OF num_char_record
          INDEX BY PLS_INTEGER;
    
       rec_var   rec_attrib;
    BEGIN
       SELECT first_name,
              employee_id
       BULK COLLECT INTO rec_var
         FROM employees
        WHERE ROWNUM <= 10;
    
       FOR iloop IN rec_var.FIRST .. rec_var.LAST
       LOOP
          DBMS_OUTPUT.put_line (
             'Loop.' || iloop || rec_var (iloop).char_attrib || '###'
             || rec_var (iloop).num_attrib
          );
       END LOOP;
    
       SELECT first_name,
              employee_id
       BULK COLLECT INTO obj_var
         FROM employees
        WHERE ROWNUM <= 10;
    END;
    Here's the code for num_char_object_1
    CREATE OR REPLACE TYPE NUM_CHAR_OBJECt_1 IS OBJECT (
       char_attrib   VARCHAR2 (100),
       num_attrib    NUMBER
    );

    Welcome to the forum!

    You should be collecting objects in bulk, something like

    SELECT NUM_CHAR_OBJECt_1  (first_name,
              employee_id)
       BULK COLLECT INTO obj_var
         FROM emp
        WHERE ROWNUM <= 10;
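
    Putting it together, a complete anonymous block might look like this sketch (assuming the num_char_object_1 type and the employees table from the question):

    DECLARE
       TYPE obj_attrib IS TABLE OF num_char_object_1
          INDEX BY PLS_INTEGER;

       obj_var   obj_attrib;
    BEGIN
       -- wrap the selected columns in the object type's constructor
       SELECT num_char_object_1 (first_name, employee_id)
       BULK COLLECT INTO obj_var
         FROM employees
        WHERE ROWNUM <= 10;

       FOR i IN 1 .. obj_var.COUNT
       LOOP
          DBMS_OUTPUT.put_line (obj_var (i).char_attrib || '###' || obj_var (i).num_attrib);
       END LOOP;
    END;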
    
  • Problem with bulk collect

    Hi All,
    I am facing a problem with bulk collect and am unable to identify where my code is wrong. When I try to run the code below, it hangs, eventually ending the session. Please help me.

    Here I am providing examples of data.

    CREATE TABLE R_DUMMY
       (FA_FAC_OS NUMBER(34,14), 
      FAC_ID VARCHAR2(10) NOT NULL, 
      SYSTEM_ID NUMBER(6,0) NOT NULL, 
      WRKNG_CPY VARCHAR2(1) NOT NULL, 
      CA_ID VARCHAR2(16) NOT NULL, 
      FA_PRNT_FAC_ID VARCHAR2(10)
       );
    
    insert into r_dummy (FA_FAC_OS, FAC_ID, SYSTEM_ID, WRKNG_CPY, CA_ID, FA_PRNT_FAC_ID)
    values (10000.00000000000000, 'FA000001', 1, 'C', 'CA2001/11/0002', '');
    
    insert into r_dummy (FA_FAC_OS, FAC_ID, SYSTEM_ID, WRKNG_CPY, CA_ID, FA_PRNT_FAC_ID)
    values (500.00000000000000, 'FA000005', 1, 'C', 'CA2001/11/0002', '');
    
    insert into r_dummy (FA_FAC_OS, FAC_ID, SYSTEM_ID, WRKNG_CPY, CA_ID, FA_PRNT_FAC_ID)
    values (-500.00000000000000, 'FA000008', 1, 'C', 'CA2001/11/0002', '');
    
    insert into r_dummy (FA_FAC_OS, FAC_ID, SYSTEM_ID, WRKNG_CPY, CA_ID, FA_PRNT_FAC_ID)
    values (600.00000000000000, 'FA000013', 1, 'C', 'CA2001/11/0002', '');
    
    insert into r_dummy (FA_FAC_OS, FAC_ID, SYSTEM_ID, WRKNG_CPY, CA_ID, FA_PRNT_FAC_ID)
    values (600.00000000000000, 'FA000018', 1, 'C', 'CA2001/11/0002', '');
    
    insert into r_dummy (FA_FAC_OS, FAC_ID, SYSTEM_ID, WRKNG_CPY, CA_ID, FA_PRNT_FAC_ID)
    values (700.00000000000000, 'FA000020', 1, 'C', 'CA2001/11/0002', '');
    
    insert into r_dummy (FA_FAC_OS, FAC_ID, SYSTEM_ID, WRKNG_CPY, CA_ID, FA_PRNT_FAC_ID)
    values (1200.00000000000000, 'FA000022', 1, 'C', 'CA2001/11/0002', '');
    
    CREATE TABLE R_DUMMY_1
       (FA_FAC_OS NUMBER(34,14), 
         FAC_ID VARCHAR2(10) NOT NULL,
         SYSTEM_ID NUMBER(6,0) NOT NULL,
         VER_NUM NUMBER(4,2) NOT NULL
       );
    
    insert into r_dummy_1 (FA_FAC_OS, FAC_ID, SYSTEM_ID, VER_NUM)
    values (10000.00000000000000, 'FA000001', 1, 3.00);
    
    insert into r_dummy_1 (FA_FAC_OS, FAC_ID, SYSTEM_ID, VER_NUM)
    values (10000.00000000000000, 'FA000001', 1, 2.00);
    
    insert into r_dummy_1 (FA_FAC_OS, FAC_ID, SYSTEM_ID, VER_NUM)
    values (10000.00000000000000, 'FA000001', 1, 1.00);
    
    insert into r_dummy_1 (FA_FAC_OS, FAC_ID, SYSTEM_ID, VER_NUM)
    values (500.00000000000000, 'FA000005', 1, 3.00);
    
    insert into r_dummy_1 (FA_FAC_OS, FAC_ID, SYSTEM_ID, VER_NUM)
    values (500.00000000000000, 'FA000005', 1, 2.00);
    
    insert into r_dummy_1 (FA_FAC_OS, FAC_ID, SYSTEM_ID, VER_NUM)
    values (500.00000000000000, 'FA000005', 1, 1.00);
    
    insert into r_dummy_1 (FA_FAC_OS, FAC_ID, SYSTEM_ID, VER_NUM)
    values (-500.00000000000000, 'FA000008', 1, 3.00);
    
    insert into r_dummy_1 (FA_FAC_OS, FAC_ID, SYSTEM_ID, VER_NUM)
    values (-500.00000000000000, 'FA000008', 1, 2.00);
    And my PL/SQL block:
    Set serveroutput on;
    
    Declare
              vPkgCaId          r_dummy.ca_id%type          := 'CA2001/11/0002';
              vPkgSystemId     r_dummy.system_id%type     := 1;
              vPkgWrkFlg          r_dummy.WRKNG_CPY%type     :=  'C';
    
              
    
              
              type t_type is record
                                       (
                                       v_FA_FAC_OS     r_dummy.FA_FAC_OS%type,
                                       v_FAC_ID     r_dummy.FAC_ID%type,
                                       v_SYSTEM_ID     r_dummy.SYSTEM_ID%type,
                                       v_ver_num     r_dummy_1.ver_num%type
                                       );
    
              type t_col_tbl is table of t_type index by binary_integer;
              
              l_col_tbl     t_col_tbl;
              
              
              
              
              
              --fac_id,system_id,ver_num is composite primary key for CP_CA_FAC_VER
    Begin
              
                        SELECT     fac.FA_FAC_OS,fac.FAC_ID,fac.SYSTEM_ID,ver.ver_num
                        bulk collect into l_col_tbl
                        FROM     r_dummy fac,r_dummy_1 ver
                        WHERE     fac.fac_id = ver.fac_id
                        and fac.system_id = ver.system_id
                        and fac.CA_ID = vPkgCaId
                        AND fac.SYSTEM_ID = vPkgSystemId
                        AND fac.WRKNG_CPY = vPkgWrkFlg
                        START WITH fac.CA_ID = vPkgCaId
                                  AND fac.SYSTEM_ID = vPkgSystemId
                                  AND fac.WRKNG_CPY = vPkgWrkFlg AND fac.FA_PRNT_FAC_ID IS NULL
                        CONNECT BY PRIOR fac.FAC_ID = fac.FA_PRNT_FAC_ID
                                  AND fac.SYSTEM_ID = vPkgSystemId
                                  AND fac.WRKNG_CPY = vPkgWrkFlg;
                        
              
              forall i in 1..l_col_tbl.count
    
                   
                   update     r_dummy_1 ver
                   set          ver.FA_FAC_OS           = l_col_tbl(i).v_FA_FAC_OS
                   where     fac_id                    = l_col_tbl(i).v_FAC_ID
                             and system_id          = l_col_tbl(i).v_system_id
                             and ver_num               = l_col_tbl(i).v_ver_num
                             ;
              
    
              Commit;
    
    End;
    /
    Please help me. I was able to do this using a cursor instead of bulk collect, but I think that bulk collect will give better performance. Please suggest whether my code needs any changes.


    Regards
    Rambeau

    I'd rather do it in straight SQL, which is much faster than BULK COLLECT

     UPDATE r_dummy_1 ver
        SET ver.FA_FAC_OS =
            (
              SELECT fa_fac_os
                FROM (
                        SELECT fac.FA_FAC_OS,fac.FAC_ID,fac.SYSTEM_ID,ver.ver_num
                          FROM r_dummy fac,r_dummy_1 ver
                         WHERE fac.fac_id = ver.fac_id
                           and fac.system_id = ver.system_id
                           and fac.CA_ID = vPkgCaId
                           AND fac.SYSTEM_ID = vPkgSystemId
                           AND fac.WRKNG_CPY = vPkgWrkFlg
                         START WITH fac.CA_ID = vPkgCaId
                           AND fac.SYSTEM_ID = vPkgSystemId
                           AND fac.WRKNG_CPY = vPkgWrkFlg
                           AND fac.FA_PRNT_FAC_ID IS NULL
                       CONNECT BY PRIOR fac.FAC_ID = fac.FA_PRNT_FAC_ID
                           AND fac.SYSTEM_ID = vPkgSystemId
                           AND fac.WRKNG_CPY = vPkgWrkFlg
                     ) t
               WHERE t.fac_id = ver.fac_id
                 AND t.system_id = ver.system_id
                 AND t.ver_num = ver.ver_num
            )
      WHERE EXISTS
            (
              SELECT fa_fac_os
                FROM (
                        SELECT fac.FA_FAC_OS,fac.FAC_ID,fac.SYSTEM_ID,ver.ver_num
                          FROM r_dummy fac,r_dummy_1 ver
                         WHERE fac.fac_id = ver.fac_id
                           and fac.system_id = ver.system_id
                           and fac.CA_ID = vPkgCaId
                           AND fac.SYSTEM_ID = vPkgSystemId
                           AND fac.WRKNG_CPY = vPkgWrkFlg
                         START WITH fac.CA_ID = vPkgCaId
                           AND fac.SYSTEM_ID = vPkgSystemId
                           AND fac.WRKNG_CPY = vPkgWrkFlg
                           AND fac.FA_PRNT_FAC_ID IS NULL
                       CONNECT BY PRIOR fac.FAC_ID = fac.FA_PRNT_FAC_ID
                           AND fac.SYSTEM_ID = vPkgSystemId
                           AND fac.WRKNG_CPY = vPkgWrkFlg
                     ) t
               WHERE t.fac_id = ver.fac_id
                 AND t.system_id = ver.system_id
                 AND t.ver_num = ver.ver_num
            )      
    
  • Using the cursor FOR loop vs BULK COLLECT INTO

    Hi all
    in which cases do we prefer the cursor FOR loop, and in which the cursor with BULK COLLECT? The following contains two blocks with the same query: one uses the FOR cursor loop, the other uses BULK COLLECT. Which of the two performs better on the given data? How do we measure performance between these two?

    I use the example of HR schema:
    declare
    l_start number;
    BEGIN
    l_start:= DBMS_UTILITY.get_time;
    dbms_lock.sleep(1);
    FOR employee IN (SELECT e.last_name, j.job_title FROM employees e,jobs j 
    where e.job_id=j.job_id and  e.job_id LIKE '%CLERK%' AND e.manager_id > 120 ORDER BY e.last_name)
    LOOP
      DBMS_OUTPUT.PUT_LINE ('Name = ' || employee.last_name || ', Job = ' || employee.job_title);
    END LOOP;
    DBMS_OUTPUT.put_line('total time: ' || to_char(DBMS_UTILITY.get_time - l_start) || ' hsecs');
    END;
    /
     
    declare
    l_start number;
    type rec_type is table of varchar2(20);
    name_rec rec_type;
    job_rec rec_type;
    begin
    l_start:= DBMS_UTILITY.get_time;
    dbms_lock.sleep(1);
    SELECT e.last_name, j.job_title bulk collect into name_rec,job_rec FROM employees e,jobs j 
    where e.job_id=j.job_id and  e.job_id LIKE '%CLERK%' AND e.manager_id > 120 ORDER BY e.last_name;
    for j in name_rec.first..name_rec.last loop
      DBMS_OUTPUT.PUT_LINE ('Name = ' || name_rec(j) || ', Job = ' || job_rec(j));
    END LOOP;
    DBMS_OUTPUT.put_line('total time: ' || to_char(DBMS_UTILITY.get_time - l_start) || ' hsecs');
    end;
    /
    In this code, I put a timer in each block, but they are useless, since both run virtually instantaneously...

    Best regards
    Val

    (1) The primary use of bulk fetching is to reduce the context switching between the SQL and PL/SQL engines.
    (2) You should always use LIMIT with bulk collect, so that it does not increase the load on the PGA.
    (3) The ideal number of LIMIT rows is around 100.

    Also if you really want to compare performance improvements between the two different approaches to sql pl try to use the package of runstats tom Kyte

    http://asktom.Oracle.com/pls/Apex/asktom.download_file?p_file=6551378329289980701
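
    For what it's worth, the LIMIT advice in point (2) looks like this in practice (a sketch against the same HR employees table):

    DECLARE
       CURSOR c IS SELECT last_name FROM employees;

       TYPE name_tab IS TABLE OF employees.last_name%TYPE;

       l_names   name_tab;
    BEGIN
       OPEN c;
       LOOP
          -- fetch at most 100 rows per round trip to cap PGA usage
          FETCH c BULK COLLECT INTO l_names LIMIT 100;
          EXIT WHEN l_names.COUNT = 0;

          FOR i IN 1 .. l_names.COUNT
          LOOP
             DBMS_OUTPUT.PUT_LINE (l_names (i));
          END LOOP;
       END LOOP;
       CLOSE c;
    END;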

  • Cursor Bulk Collect

    Hi all
    I have a doubt about cursors with bulk collect. Both versions work correctly.

    1st Version: 
    DECLARE
      CURSOR c1 IS (SELECT t2 FROM test10);
      TYPE typ_tbl IS TABLE OF c1%rowtype;
      v typ_tbl;
    BEGIN
      OPEN c1;  
        FETCH c1 BULK COLLECT INTO v;
      CLOSE c1;
      FOR i IN v.first..v.last
      LOOP
        DBMS_OUTPUT.PUT_LINE(v(i).t2);
      END LOOP;  
    END;
    
    2nd version: 
    DECLARE
      CURSOR c1 IS (SELECT t2 FROM test10);
      TYPE typ_tbl IS TABLE OF c1%rowtype;
      v typ_tbl;
    BEGIN
      OPEN c1;
      LOOP                                                 --Loop added
        FETCH c1 BULK COLLECT INTO v;
        EXIT WHEN c1%NOTFOUND; 
      END LOOP;
      CLOSE c1;
      FOR i IN v.first..v.last
      LOOP
        DBMS_OUTPUT.PUT_LINE(v(i).t2);
      END LOOP;  
    END;
    Is it necessary to have a loop and an EXIT WHEN %NOTFOUND statement when a cursor is used with bulk collect?

    Published by: SamFisher February 14, 2012 13:26

    SamFisher wrote:
    for idx in (SELECT t2 FROM test10)
    loop

    end loop;

    But in doing so, there will be more context switches, ain't it?

    Yes, you will have to switch between SQL and PL/SQL, which is one of the penalties you pay for using PL/SQL when SQL alone would do (not saying that is your case here, just a generality). However, there are trade-offs. Memory is a limited resource; it is expensive and you do not have an infinite amount of it. It is not effective to drag many thousands of records into memory and then process them all together; if you bulk load a bit and process a little at a time, you avoid exhausting any given resource (CPU, memory, etc...).

    As I said, Oracle will use an array fetch size of 100, instead of X, where X is the total number of records in your query when you use a bulk collect without LIMIT. If you had to carry 10,000 books across the room, do you think you would sooner carry 100 at a time and make many trips, or find a way to carry all 10,000 in one trip?

    If you are interested in a comparative analysis of the differences, set up a simple table and load 100, 500, 1000, 10000, 100000, etc... records into it and see what the processing times look like.

    For general-purpose use, the implicit cursor loop wins hands down from every perspective (my opinion). Then again, that assumes you NEED pl/sql at all.

    Published by: Tubby on February 14, 2012 11:41

  • Fetch Bulk collect Insert error

    CREATE OR REPLACE PROCEDURE bulk_collect_limit (StartRowOptional IN NUMBER, EndRowOptional IN NUMBER, fetchsize IN NUMBER)
    IS
       TYPE sid IS TABLE OF NUMBER;
       TYPE screated_date IS TABLE OF DATE;
       TYPE slookup_id IS TABLE OF NUMBER;
       TYPE sdata IS TABLE OF VARCHAR2 (50);

       l_sid             sid;
       l_screated_date   screated_date;
       l_slookup_id      slookup_id;
       l_sdata           sdata;

       l_start           NUMBER;

       CURSOR c_data IS
          SELECT id, created_date, lookup_id, data1
            FROM big_table
           WHERE id >= StartRowOptional AND id <= EndRowOptional;

       TYPE reclist IS TABLE OF c_data%ROWTYPE;

       recs              reclist;
    BEGIN
       l_start := DBMS_UTILITY.get_time;

       OPEN c_data;

       LOOP
          FETCH c_data BULK COLLECT INTO recs LIMIT fetchsize;

          FOR i IN recs.FIRST .. recs.LAST
          LOOP
             INSERT INTO big_table2
             VALUES (recs (i).id, recs (i).created_date, recs (i).lookup_id, recs (i).data1);
          END LOOP;

          EXIT WHEN c_data%NOTFOUND;
       END LOOP;

       CLOSE c_data;

       COMMIT;

       dbms_output.put_line ('Total elapsed :- ' || (DBMS_UTILITY.get_time - l_start) || ' hsecs');
    EXCEPTION
       WHEN OTHERS THEN
          RAISE;
    END;
    /

    SHOW ERRORS;

    Warning: execution completed with warning

    29/87 PLS-00302: component "DATA1" must be declared

    29/87 PL/SQL: ORA-00984: column not allowed here

    29/6 PL/SQL: statement ignored

    I get the above error in the insert statement.

    Please can I get help to solve it.

    I won't answer your question, but will tell you something else - do not do this with bulk collect. Do it in a single SQL statement.

    Stop using loops, and stop committing inside loops.

    That will solve the error, make it less likely you get error ORA-01555, create less redo and undo, and be more efficient.

    Oh, and this does nothing useful:

    EXCEPTION

    WHEN OTHERS THEN

    RAISE;

    The entire procedure should be:

    CREATE OR REPLACE PROCEDURE bulk_collect_limit (startrow IN NUMBER,endrow IN NUMBER,fetchsize IN NUMBER)
    IS
    
     l_start NUMBER;
    
    begin
    
    l_start := DBMS_UTILITY.get_time;
    
    insert into big_table2(put a column list here for crikey's sake)
    select id,created_date,lookup_id,data1 FROM big_table WHERE id >= startrow AND id <= endrow;
    
    DBMS_OUTPUT.put_line('Total Elapsed Time :- ' || (DBMS_UTILITY.get_time - l_start) || ' hsecs');
    
    end;
    
  • Error using BULK collect with RECORD TYPE

    Hello

    I wrote a simple procedure declaring a record type & then a NESTED TABLE type of that record.

    I then select the data using BULK COLLECT & try to access it via a LOOP... and get an ERROR.

    ------------------------------------------------------------------------------------------------------------------------------------------------------

    CREATE OR REPLACE PROCEDURE sp_test_bulkcollect
    IS

    TYPE rec_type IS RECORD (
    emp_id VARCHAR2 (20),
    level_id NUMBER
    );

    TYPE v_rec_type IS TABLE OF rec_type;

    BEGIN

    SELECT employe_id, level_id
    BULK COLLECT INTO v_rec_type
    FROM portfolio_exec_level_mapping
    WHERE portfolio_execp_id = 2851852;

    FOR indx IN v_rec_type.FIRST .. v_rec_type.LAST
    LOOP

    dbms_output.put_line ('Emp -- ' || v_rec_type.emp_id (indx) || ' ' || v_rec_type.level_id (indx));

    END LOOP;

    END;
    -----------------------------------------------------------------------------------------------------------------------------------

    Here is the ERROR I get...


    - Compilation errors for PROCEDURE DOMRATBDTESTUSER.SP_TEST_BULKCOLLECT

    Error: PLS-00321: expression 'V_REC_TYPE' is inappropriate as the left hand side of an assignment statement
    Line: 15
    Text: FROM portfolio_exec_level_mapping

    Error: PL/SQL: ORA-00904: invalid identifier
    Line: 16
    Text: WHERE portfolio_execp_id = 2851852;

    Error: PL/SQL: statement ignored
    Line: 14
    Text: BULK COLLECT INTO v_rec_type

    Error: PLS-00302: component 'FIRST' must be declared
    Line: 19
    Text: LOOP

    Error: PL/SQL: statement ignored
    Line: 19
    Text: LOOP
    ------------------------------------------------------------------------------------------------

    Help PLZ.

    and with a complete code example:

    SQL> CREATE OR REPLACE PROCEDURE sp_test_bulkcollect
      2  IS
      3  TYPE rec_type IS RECORD (
      4  emp_id VARCHAR2(20),
      5  level_id NUMBER
      6  );
      7  TYPE v_rec_type IS TABLE OF rec_type;
      8  v v_rec_type;
      9  BEGIN
     10     SELECT empno, sal
     11     BULK COLLECT INTO v
     12     FROM emp
     13     WHERE empno = 7876;
     14     FOR indx IN v.FIRST..v.LAST
     15     LOOP
     16        dbms_output.put_line('Emp -- '||v(indx).emp_id||' '||v(indx).level_id);
     17     END LOOP;
     18  END;
     19  /
    
    Procedure created.
    
    SQL>
    SQL> show error
    No errors.
    SQL>
    SQL> begin
      2     sp_test_bulkcollect;
      3  end;
      4  /
    Emp -- 7876 1100
    
    PL/SQL procedure successfully completed.
    
  • BULK COLLECT with a FOR loop?

    Let's say I have a table with a column number and a date column.

    I also have a query that uses a number and a date as parameters. I want to run this query for each row in the table (using the row's date and number columns as the parameters). My thinking is to loop through the rows and run the query with each row's values as parameters. That becomes a large number of select queries. Is it possible to use bulk collect here? All the examples I've seen use a predefined cursor. Also, I don't know how to solve this problem without a cursor.

    Here's the query I want to do for each line:
    select * from reading_values rv, 
        (select * from 
           (select id, datereading
                  from readings
                  where datereading < p_date and pointid = p_id
                  order by datereading desc)
           where rownum <= 1) t
         where rv.reading_id = t.id

    After reading your initial statement 3 times, I simply added a third table to the select. I call this tableX, with two columns colA (number) and colB (date).

    select rv.*, r.*, row_number() over (partition by r.pointid  order by r.datereading desc) rn
    from reading_values rv
    join readings r on rv.reading_id = r.id
    join tableX x on x.colA = r.pointid and  r.datereading < x.colB
    order by  r.pointid, datereading desc
    

    You can restrict it to return only one row for each data point by using the rn column I already added.

    select * /* probably need sinlge column names here to avoid duplications */
    from (
       select rv.*, r.*, row_number() over (partition by r.pointid  order by r.datereading desc) rn
       from reading_values rv
       join readings r on rv.reading_id = r.id
       join tableX x on x.colA = r.pointid and  r.datereading < x.colB
    )
    where rn = 1
    order by pointid, datereading desc
    

    Published by: Sven w. on June 23, 2010 16:47

  • Exception handlers in bulk collect and forall operations?

    Hello world

    My version of DB is

    BANNER

    ----------------------------------------------------------------

    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi

    PL/SQL Release 10.2.0.1.0 - Production

    CORE 10.2.0.1.0 Production

    AMT for Linux: Version 10.2.0.1.0 - Production

    NLSRTL Version 10.2.0.1.0 - Production

    My question is: what are the possible exception handlers we can add in bulk collect and forall operations?

    When we use FORALL, we add SAVE EXCEPTIONS and SQL%BULK_EXCEPTIONS. But apart from that, what can we add for bulk collect?

    Kind regards

    BS2012.

    SAVE EXCEPTIONS stores all the exceptions that occur during bulk processing in a collection and, at the end of processing, raises an exception. The SQL%BULK_EXCEPTIONS collection holds all the exceptions. That is the right way to handle exceptions during bulk processing, and it's all you need. Not sure what else you expect.
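
    A minimal sketch of that SAVE EXCEPTIONS pattern (some_table and its NOT NULL column num_col are hypothetical):

    DECLARE
       TYPE num_tab IS TABLE OF NUMBER;

       l_vals        num_tab := num_tab (1, 2, NULL, 4);  -- the NULL row is meant to fail

       bulk_errors   EXCEPTION;
       PRAGMA EXCEPTION_INIT (bulk_errors, -24381);       -- ORA-24381: error(s) in array DML
    BEGIN
       FORALL i IN 1 .. l_vals.COUNT SAVE EXCEPTIONS
          INSERT INTO some_table (num_col) VALUES (l_vals (i));
    EXCEPTION
       WHEN bulk_errors THEN
          FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT
          LOOP
             DBMS_OUTPUT.put_line ('Row ' || SQL%BULK_EXCEPTIONS (j).ERROR_INDEX
                                   || ' failed with ORA-' || SQL%BULK_EXCEPTIONS (j).ERROR_CODE);
          END LOOP;
    END;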

  • Using bulk collect into with LIMIT to avoid running out of TEMP tablespace?

    Hi all

    I want to know if using BULK COLLECT INTO with LIMIT will help to avoid running out of TEMP tablespace.

    We use Oracle 11g R1.

    I am assigned a task of creating logging for all tables in an APEX query.

    I created procedures to execute some sql statements to do a CTAS (create table as select), and then create triggers on these tables.

    We have about three tables with more than 26 million records.

    It seemed to run very well until we reached a table with more than 15 million records, when we got an error saying we had run out of TEMP tablespace.

    I googled the topic and found these tips:

    Use NOLOGGING

    Use PARALLEL

    BULK COLLECT INTO with LIMIT

    However, those tips usually address running out of memory rather than running out of TEMP tablespace.

    I'm just a junior developer and have not dealt with tables of more than 10 million records at a time like this before.

    The database support is outsourced, and we try to keep contact with the DBA as minimal as possible. My Manager asked me to find a solution without asking the administrator to extend the TEMP tablespace.

    I wrote a few BULK COLLECT INTO statements to insert about 300,000 rows at once on the development environment. It seems to work.

    But the code works only against a table of 4,000,000 records. I am trying to add more data to the test table, but yet again we ran out of tablespace on DEV (this time it's a DATA tablespace instead of TEMP).

    I'll give it a go against the 26-million-record table on Production this weekend. I just want to know if it is worth trying.

    Thanks for reading this.

    Ann

    I really needed to check that you did not have huge row sizes (like several KB per row); they are not too bad at all, which is good!

    A good rule of thumb to maximize the LIMIT clause is to see how much memory you can afford to consume in the PGA (to reduce the number of fetch and FORALL calls, and therefore the context switches) and adjust the limit to be as close to that amount as possible.

    Use the routines below to check what threshold value would be best suited for your system, because it depends on your memory allocation and CPU consumption. Your mileage will vary based on your PGA limits and row lengths, but this method will give a good order of magnitude.

    CREATE OR REPLACE PROCEDURE show_pga_memory (context_in IN VARCHAR2 DEFAULT NULL)
    IS
       l_memory NUMBER;
    BEGIN
       SELECT st.value
         INTO l_memory
         FROM SYS.v_$session se, SYS.v_$sesstat st, SYS.v_$statname nm
        WHERE se.audsid = USERENV ('SESSIONID')
          AND st.statistic# = nm.statistic#
          AND se.sid = st.sid
          AND nm.name = 'session pga memory';

       dbms_output.put_line (CASE
                                WHEN context_in IS NULL THEN NULL
                                ELSE context_in || ' - '
                             END
                             || 'PGA memory used in session = '
                             || TO_CHAR (l_memory));
    END show_pga_memory;

    DECLARE
       PROCEDURE fetch_all_rows (limit_in IN PLS_INTEGER)
       IS
          CURSOR source_cur
          IS
             SELECT *
               FROM YOUR_TABLE;

          TYPE source_aat IS TABLE OF source_cur%ROWTYPE
             INDEX BY PLS_INTEGER;

          l_source   source_aat;
          l_start    PLS_INTEGER;
          l_end      PLS_INTEGER;
       BEGIN
          DBMS_SESSION.free_unused_user_memory;
          show_pga_memory (limit_in || ' - BEFORE');
          l_start := DBMS_UTILITY.get_cpu_time;
          OPEN source_cur;

          LOOP
             FETCH source_cur BULK COLLECT INTO l_source LIMIT limit_in;
             EXIT WHEN l_source.COUNT = 0;
          END LOOP;

          CLOSE source_cur;
          l_end := DBMS_UTILITY.get_cpu_time;
          dbms_output.put_line ('Elapsed CPU time for limit of '
                                || limit_in
                                || ' = '
                                || TO_CHAR (l_end - l_start));
          show_pga_memory (limit_in || ' - AFTER');
       END fetch_all_rows;
    BEGIN
       fetch_all_rows (20000);
       fetch_all_rows (40000);
       fetch_all_rows (60000);
       fetch_all_rows (80000);
       fetch_all_rows (100000);
       fetch_all_rows (150000);
       fetch_all_rows (250000);
       -- etc.
    END;

  • PLS-00201: identifier 'i' must be declared when using BULK COLLECT with FORALL to insert data in 2 tables?

    Hi.

    Declare
       cursor c_1
       is
        select col1,col2,col3,col4
    from table1;
    
    
       type t_type is table of c_1%rowtype index by binary_integer;
       v_data t_type;
    BEGIN
       OPEN c_1;
       LOOP
          FETCH c_1 BULK COLLECT INTO v_data LIMIT 200;
          EXIT WHEN v_data.COUNT = 0;
          FORALL i IN v_data.FIRST .. v_data.LAST
             INSERT INTO xxc_table
               (col1,
                col3,
                col4
               )
                SELECT v_data (i).col1,
                       v_data (i).col3,
                       v_data (i).col4
                  FROM DUAL
                 WHERE NOT EXISTS
                              (SELECT 1
                                 FROM xxc_table a
                                WHERE col1=col1
                                      .....
                              );
                         --commit;
             INSERT INTO xxc_table1
               (col1,
               col2,
              col3,
              col4
               )
                SELECT v_data (i).col1,
                       v_data (i).col2,
                       v_data (i).col3,
                       'Y'
                  FROM DUAL
                 WHERE NOT EXISTS
                              (SELECT 1
                                 FROM xxc_table1 a
                                WHERE col1=col1
          .....
         );
    
    
           --exit when c_1%notfound;
       END LOOP;
       CLOSE c_1;
       commit;
    END;
    
    
    
    
    
    
    
    

    I get 40/28 PLS-00201: identifier 'I' must be declared. What is the problem in the above code? Please help me - I have lakhs of rows of data.

    Thank you

    Post edited by: Rajesh123 I changed IDX

    Post edited by: Rajesh123 changed t_type c_1 in Fetch

    But using a multi-table INSERT to insert into both tables at once in the same query would do the job without any PL/SQL bulk collection, and avoid querying twice too.

    for example, as a single INSERT...

    SQL> create table table1 as
      2  select 1 as col1, 1 as col2, 1 as col3, 1 as col4 from dual union all
      3  select 2,2,2,2 from dual union all
      4  select 3,3,3,3 from dual union all
      5  select 4,4,4,4 from dual union all
      6  select 5,5,5,5 from dual union all
      7  select 6,6,6,6 from dual union all
      8  select 7,7,7,7 from dual union all
      9  select 8,8,8,8 from dual union all
     10  select 9,9,9,9 from dual union all
     11  select 10,10,10,10 from dual
     12  /

    Table created.

    SQL> create table xxc_table as
      2  select 1 as col1, 2 as col3, 3 as col4 from dual union all
      3  select 3, 4, 5 from dual union all
      4  select 5, 6, 7 from dual
      5  /

    Table created.

    SQL> create table xxc_table1 as
      2  select 3 as col1, 4 as col2, 5 as col3, 'N' as col4 from dual union all
      3  select 6, 7, 8, 'N' from dual
      4  /

    Table created.

    SQL> insert all
      2  when xt_insert is null then
      3  into xxc_table (col1, col3, col4)
      4  values (col1, col3, col4)
      5  when xt1_insert is null then
      6  into xxc_table1 (col1, col2, col3, col4)
      7  values (col1, col2, col3, 'Y')
      8  select t1.col1, t1.col2, t1.col3, t1.col4
      9       , xt.col1 as xt_insert
     10       , xt1.col1 as xt1_insert
     11  from table1 t1
     12  left outer join xxc_table xt on (t1.col1 = xt.col1)
     13  left outer join xxc_table1 xt1 on (t1.col1 = xt1.col1)
     14  /

    15 rows created.

    SQL> select * from xxc_table order by 1;

          COL1       COL3       COL4
    ---------- ---------- ----------
             1          2          3
             2          2          2
             3          4          5
             4          4          4
             5          6          7
             6          6          6
             7          7          7
             8          8          8
             9          9          9
            10         10         10

    10 rows selected.

    SQL> select * from xxc_table1 order by 1;

          COL1       COL2       COL3 C
    ---------- ---------- ---------- -
             1          1          1 Y
             2          2          2 Y
             3          4          5 N
             4          4          4 Y
             5          5          5 Y
             6          7          8 N
             7          7          7 Y
             8          8          8 Y
             9          9          9 Y
            10         10         10 Y

    10 rows selected.

    SQL>

  • Bulk collect / forall: which collection type?

    Hi I am trying to speed up the query below using bulk collect / forall:

    SELECT h.cust_order_no AS custord, l.shipment_set AS sset
    FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
    WHERE h.message_id = l.message_id
    AND h.contract = '12384'
    AND l.shipment_set IS NOT NULL
    AND h.cust_order_no IS NOT NULL
    GROUP BY h.cust_order_no, l.shipment_set

    I would like to get the 2 selected fields above into a new table as quickly as possible, but I'm pretty new to Oracle and I'm finding it hard to work out the best way to do it. The query below is not working (no doubt there are many issues), but I hope it's developed enough to show the sort of thing I am trying to achieve:

    DECLARE
       TYPE xcustord IS TABLE OF info.tlp_out_messaging_hdr.cust_order_no%TYPE;
       TYPE xsset IS TABLE OF info.tlp_out_messaging_lin.shipment_set%TYPE;
       TYPE xarray IS TABLE OF tp_a1_tab%ROWTYPE INDEX BY BINARY_INTEGER;
       v_xarray   xarray;
       v_xcustord xcustord;
       v_xsset    xsset;
       CURSOR cur IS
          SELECT h.cust_order_no AS custord, l.shipment_set AS sset
            FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
           WHERE h.message_id = l.message_id
             AND h.contract = '1111'
             AND l.shipment_set IS NOT NULL
             AND h.cust_order_no IS NOT NULL;
    BEGIN
       OPEN cur;
       LOOP
          FETCH cur BULK COLLECT INTO v_xarray LIMIT 10000;
          EXIT WHEN v_xcustord.COUNT = 0;
          FORALL i IN 1 .. v_xarray.COUNT
             INSERT INTO tp_a1_tab (cust_order_no, shipment_set)
             VALUES (v_xarray(i).cust_order_no, v_xarray(i).shipment_set);
          COMMIT;
       END LOOP;
       CLOSE cur;
    END;

    I'm running on Oracle 9i release 2.

    A short-term solution may be a materialized view: pay for the slow and complex query execution once per hour, and materialize the results in a table (with indexes supporting the queries against the materialized view).
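
    A sketch of that materialized-view approach, reusing the query from the question; the view name, index name, and hourly refresh interval are illustrative assumptions:

    CREATE MATERIALIZED VIEW order_shipment_mv      -- illustrative name
    BUILD IMMEDIATE
    REFRESH COMPLETE
    START WITH SYSDATE NEXT SYSDATE + 1/24          -- example: refresh hourly
    AS
    SELECT h.cust_order_no AS custord, l.shipment_set AS sset
      FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
     WHERE h.message_id = l.message_id
       AND l.shipment_set IS NOT NULL
       AND h.cust_order_no IS NOT NULL
     GROUP BY h.cust_order_no, l.shipment_set;

    -- index supporting lookups against the materialized view (illustrative)
    CREATE INDEX order_shipment_mv_i1 ON order_shipment_mv (custord);

    Queries then read from the materialized view instead of re-running the expensive join every time.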

    The good solution: analyze the logic and the SQL, determine what it does and how it does it, and then figure out how that can be improved.

    Ripping the SQL apart into separate cursors and injecting PL/SQL code to glue them back together is a great way to make performance even worse.
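
    In that vein, the whole cursor-and-FORALL block in the question can usually be collapsed into one statement; a sketch, reusing the tables and predicates quoted above:

    -- single-pass insert: no round trips between SQL and PL/SQL
    INSERT INTO tp_a1_tab (cust_order_no, shipment_set)
    SELECT h.cust_order_no, l.shipment_set
      FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
     WHERE h.message_id = l.message_id
       AND h.contract = '1111'
       AND l.shipment_set IS NOT NULL
       AND h.cust_order_no IS NOT NULL
     GROUP BY h.cust_order_no, l.shipment_set;

    COMMIT;

    A single INSERT ... SELECT lets the SQL engine do all the work in one pass, which is almost always faster than fetching into collections and inserting them back.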

  • Bulk collect with sequence Nextval

    Hello


    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit


    I have a doubt about bulk collect with a sequence NEXTVAL. I need to update a table with a sequence's NEXTVAL.

    Where should I place the SELECT below in the procedure: before or after the loop?


    SELECT prop_id_s.nextval INTO v_prop_id FROM dual;
    CREATE OR REPLACE PROCEDURE  (state IN varchar2)
    AS
       
       CURSOR get_all
       IS
          SELECT                              /*+ parallel (A, 8) */
                A .ROWID
                from Loads A WHERE A.Prop_id IS NULL;
    
       TYPE b_ROWID
       IS
          TABLE OF ROWID
             INDEX BY BINARY_INTEGER;
    
       
       lns_rowid          b_ROWID;
    BEGIN
    
    
       OPEN Get_all;
    
       LOOP
          FETCH get_all BULK COLLECT INTO   lns_rowid LIMIT 10000;
    
          FORALL I IN 1 .. lns_rowid.COUNT
             UPDATE   loads a
                SET   a.prop_id= v_prop_id (I)
              WHERE   A.ROWID = lns_rowid (I) AND a.prop_id IS NULL;
    
          
          COMMIT;
          EXIT WHEN get_all%NOTFOUND;
       END LOOP;
    
       
    
       CLOSE Get_all;
    END;
    /
    Published by: 960736 on January 23, 2013 12:51

    Hello

    It depends on what results you want. Should all updated rows take the same value, or should each get a unique value?

    Either way, you don't need the cursors and loop. Just a simple UPDATE statement.

    If each row requires a unique value from the sequence, then:

    UPDATE  loads
    SET     prop_id     = prod_id_s.NEXTVAL
    WHERE     prop_id     IS NULL
    ;
    

    If all the rows that have NULL need the same value, then:

    SELECT     prod_id_s.nextval
    INTO     v_prop_id
    FROM     dual;
    
    UPDATE  loads
    SET     prop_id     = v_prop_id
    WHERE     prop_id     IS NULL
    ;
    

    Don't forget to declare v_prop_id as a NUMBER.

    I hope that answers your question.
    If not, post a small example of data (CREATE TABLE and INSERT statements, relevant columns only) for all the tables and sequences involved, and also post the results you want from that data.
    If you're asking about a DML statement, such as UPDATE, the sample data will be the contents of the table(s) before the DML, and the results will be the state of the changed table(s) when it's all over.
    Explain, using specific examples, how you get those results from that data.
    Always say which version of Oracle you are using (for example, 11.2.0.2.0).
    See the forum FAQ {message:id=9360002}

  • Doubt on bulk collect with LIMIT

    Hello

    I have a doubt about bulk collect: when does the COMMIT happen?

    I have an example in PSOUG
    http://psoug.org/reference/array_processing.html
    CREATE TABLE servers2 AS
    SELECT *
    FROM servers
    WHERE 1=2;
    
    DECLARE
     CURSOR s_cur IS
     SELECT *
     FROM servers;
    
     TYPE fetch_array IS TABLE OF s_cur%ROWTYPE;
     s_array fetch_array;
    BEGIN
      OPEN s_cur;
      LOOP
        FETCH s_cur BULK COLLECT INTO s_array LIMIT 1000;
    
        FORALL i IN 1..s_array.COUNT
        INSERT INTO servers2 VALUES s_array(i);
    
        EXIT WHEN s_cur%NOTFOUND;
      END LOOP;
      CLOSE s_cur;
      COMMIT;
    END;
    If my table servers had 3,000,000 rows, when would the commit happen? When all the records have been inserted?
    Could it overwhelm the redo logs?
    Using 9.2.0.8

    muttleychess wrote:
    If my table servers had 3,000,000 rows, when would the commit happen?

    Commit point has nothing to do with how many rows you process. It is purely business-driven. Your code implements a business transaction, right? So if you commit before the whole transaction (from the business point of view) is complete, other sessions will already see changes that are (from a business point of view) incomplete. Besides, what happens if the rest of the transaction (from the business point of view) fails?

    SY.
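
    Since the example is a straight copy, the same end result can also be had as a single statement in a single transaction, which makes the business-transaction boundary explicit; a sketch using the same tables:

    -- one statement, one transaction: all rows or none
    INSERT INTO servers2
    SELECT *
    FROM servers;

    COMMIT;

    A single INSERT ... SELECT is atomic on its own, so other sessions never see a partially copied table.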

  • How to use Bulk collect in dynamic SQL with the example below:

    My Question is

    Using dynamic SQL with bulk collect: if we pass the name of a table as a parameter to a function, I want to display that table's column names without vowels, as an array (replace the vowels with spaces, or remove the vowels, and display the result).

    Please explain for example.

    Thank you!!

    It's just a predefined type

    SQL> desc sys.OdciVarchar2List
     sys.OdciVarchar2List VARRAY(32767) OF VARCHAR2(4000)
    

    You can just as easily declare your own collection type (and you are probably better served declaring your own type, for readability if nothing else)

    SQL> ed
    Wrote file afiedt.buf
    
      1  CREATE OR REPLACE
      2     PROCEDURE TBL_COLS_NO_VOWELS(
      3                                  p_owner VARCHAR2,
      4                                  p_tbl   VARCHAR2
      5                                 )
      6  IS
      7     TYPE vc2_tbl IS TABLE OF varchar2(4000);
      8     v_col_list vc2_tbl ;
      9  BEGIN
     10      EXECUTE IMMEDIATE 'SELECT COLUMN_NAME FROM DBA_TAB_COLUMNS WHERE OWNER = :1 AND TABLE_NAME = :2 ORDER BY COLUMN_ID'
     11         BULK COLLECT
     12         INTO v_col_list
     13        USING p_owner,
     14              p_tbl;
     15      FOR v_i IN 1..v_col_list.COUNT LOOP
     16        DBMS_OUTPUT.PUT_LINE(TRANSLATE(v_col_list(v_i),'1AEIOU','1'));
     17      END LOOP;
     18*  END;
    SQL> /
    
    Procedure created.
    
    SQL> exec tbl_cols_no_vowels( 'SCOTT', 'EMP' );
    MPN
    NM
    JB
    MGR
    HRDT
    SL
    CMM
    DPTN
    
    PL/SQL procedure successfully completed.
    

    Justin
