Bulk INSERT from SELECT - Performance Clarification

I have two tables, emp_new and emp_old. I need to load all the data from emp_old into emp_new. There is a transaction_id column in emp_new whose value should be taken from a main_transaction table that includes a region_code column. Something like -

TRANSACTION_ID REGION_CODE
-------------- -----------
           100 US
           101 AMER
           102 APAC

My bulk insert query looks like this-

INSERT INTO emp_new
(col1,
col2,
...,
...,
...,
transaction_id)
SELECT
col1,
col2,
...,
...,
...,
(SELECT transaction_id FROM main_transaction WHERE region_code = 'US')
FROM emp_old

There are millions of rows that need to be loaded this way. I would like to know whether the subselect that fetches the transaction_id is re-executed for each row, which would be very expensive, and I am looking for a way to avoid that. The main_transaction table is pre-loaded and its values will not change. Is there a way (via some hint) to indicate that the subselect should not re-run for each row?
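For what it's worth, Oracle's scalar subquery caching usually keeps a constant subselect like this from re-executing per row, but the lookup can also be made a single-execution join explicitly. A minimal sketch, reusing the table and column names from the post (the aliases and the shortened column list are mine):

```sql
-- Sketch: run the lookup once and cross join its single row to every
-- emp_old row, instead of embedding a scalar subquery in the SELECT list.
INSERT INTO emp_new (col1, col2, transaction_id)
SELECT e.col1,
       e.col2,
       t.transaction_id
FROM   emp_old e
CROSS JOIN (SELECT transaction_id
            FROM   main_transaction
            WHERE  region_code = 'US') t;
```

The inline view returns one row, so the cross join does not multiply rows; alternatively, the id could be selected once into a PL/SQL bind variable before the insert.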

On a different note, the execution plan for the whole INSERT above looks like -

--------------------------------------------------------------------------
| Id | Operation             | Name         | Rows | Bytes | Cost (%CPU) |
--------------------------------------------------------------------------
|  0 | INSERT STATEMENT      |              |  11M |   54M |   6124  (4) |
|  1 |  INDEX FAST FULL SCAN | EMPO_IE2_IDX |  11M |   54M |   6124  (4) |
--------------------------------------------------------------------------
EMPO_IE2_IDX -> index on emp_old

I'm surprised to see that the main_transaction table does not appear in the execution plan at all. Does this mean that the subselect is not executed for each row? Even if it were read only once, I would expect the table to appear in the plan.

Can someone help me understand this?

Why does the EXPLAIN PLAN output include no information about the main_transaction table?
Can someone please clarify?

As I said originally (and repeated in a later post) - probably because your PLAN_TABLE is an older version.
A current version of PLAN_TABLE is required to correctly report execution plans for the most recent features.
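If a stale local PLAN_TABLE really is the cause, one common remedy is a sketch like the following (on 10g and later a current definition is already provided by the database):

```sql
-- Drop the old private copy so the dictionary-owned PLAN_TABLE
-- (public synonym for SYS.PLAN_TABLE$, 10g+) is used instead.
DROP TABLE plan_table;

-- Or rebuild a private copy from Oracle's shipped script:
-- @?/rdbms/admin/utlxplan.sql
```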

Tags: Database

Similar Questions

  • Problem with an "INSERT INTO ... SELECT * FROM tablename" query

    Hello
    I am inserting a million records from one table into another using the "INSERT INTO ... SELECT * FROM" method. During this insertion, if even one record hits a problem then obviously the whole statement is rolled back - and, not only that, finding the offending record will be a tedious job.

    Is it possible to find which records caused problems during such an insertion? And is it possible to commit the rest of the transaction?

    Note: I know that we can use the bulk collect method to solve this problem, but in my situation I have to use the "INSERT INTO ... SELECT * FROM" method only.

    Please guide me if anyone has dealt with this problem before.

    Thank you
    Kalanidhi

    You can try the LOG ERRORS INTO clause of the INSERT statement.
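A minimal sketch of that clause (DML error logging, available from 10g Release 2; the table names here are placeholders, not from the post):

```sql
-- Create the error log table once (produces ERR$_TARGET_TAB by default).
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG('TARGET_TAB');
END;
/

-- Rows that fail are diverted to the error table instead of
-- rolling back the whole statement.
INSERT INTO target_tab
SELECT * FROM source_tab
LOG ERRORS INTO err$_target_tab ('batch 1') REJECT LIMIT UNLIMITED;

-- Inspect the failures afterwards:
SELECT ora_err_number$, ora_err_mesg$ FROM err$_target_tab;
```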

  • Insert into MDQ_OLD select * from table (lt_monitorMdq);

    I'm trying to insert into a table that has only a single column, which is a column of a user-defined type (UDT). The UDT is nested; that is, one of the attributes of the UDT is another UDT.

    I aim to insert into the table with pseudo-code like this:

    INSERT INTO t1 SELECT * FROM TABLE(the_udt);

    CREATE TABLE MDQ_OLD (myMDQ UDT_T_MONITOR_MDQ)
    NESTED TABLE myMDQ STORE AS T1_NEW
    (NESTED TABLE MONITOR_MDQ_PRIM_RIGHTS STORE AS T2_NEW);

    The MONITOR_MDQ_CLI.Read procedure below returns the parameter lt_monitorMdq, which is of a UDT type as declared. The statement "insert into MDQ_OLD select * from table (lt_monitorMdq);" fails, while the second insert statement works.

    Is it possible to get the first statement to work?

    I'm on Oracle 11g Release 2.

    DECLARE
      lt_monitorMdq UDT_T_MONITOR_MDQ;
    BEGIN
      MONITOR_MDQ_CLI.Read(TRUNC(SYSDATE),
                           TRUNC(SYSDATE),
                           NULL,
                           NULL,
                           'MILLION BTU',
                           lt_monitorMdq); -- Note: lt_monitorMdq is an OUT parameter

      -- This insert does not work
      INSERT INTO MDQ_OLD SELECT * FROM TABLE(lt_monitorMdq);

      FOR i IN 1 .. lt_monitorMdq.COUNT
      LOOP
        DBMS_OUTPUT.PUT_LINE('lt_monitorMdq: ' || lt_monitorMdq(i).mdq_id);

        -- This insert works
        INSERT INTO MDQ_OLD (myMDQ)
        VALUES (UDT_T_MONITOR_MDQ(UDT_R_MONITOR_MDQ(
                  lt_monitorMdq(i).gasday,
                  1,
                  NULL, NULL, NULL, NULL, NULL, NULL, NULL,
                  NULL, NULL, NULL, NULL, NULL, NULL, NULL,
                  UDT_T_MONITOR_MDQ_PRIM_RIGHT(
                    UDT_R_MONITOR_MDQ_PRIM_RIGHT(
                      1,
                      NULL, NULL, NULL, NULL, NULL, NULL, NULL,
                      NULL)))));
      END LOOP;
    END;

    have you tried:

    INSERT INTO MDQ_OLD (myMDQ) VALUES (lt_monitorMdq);

    Out of curiosity:

    Is there a particular reason why you created a table with a single column of the UDT type instead of:

    CREATE TABLE... OF UDT_T_MONITOR_MDQ;

    I can tell you from experience that without the extra nesting level you can much more easily query the data in the nested table.

    MK
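If UDT_T_MONITOR_MDQ is the collection type and UDT_R_MONITOR_MDQ its underlying object type (the thread does not show their definitions, so this is an assumption), the object-table idea suggested above would look roughly like:

```sql
-- Hypothetical sketch: an object table built on the row (object) type,
-- so each collection element becomes one row rather than one nested value.
CREATE TABLE mdq_new OF UDT_R_MONITOR_MDQ;

-- From within PL/SQL, a collection variable of the matching collection
-- type can then be unnested straight into it:
INSERT INTO mdq_new
SELECT * FROM TABLE(lt_monitorMdq);
```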

  • Performance issue Bulk Insert PL/SQL table type

    Hi all

    I am implementing a batch job to fill a table with a large number of records (>3,000,000). To reduce execution time, I use PL/SQL collections to temporarily store the data that must be written to the destination table. Once all records have been accumulated in the collection, I use a FORALL statement to bulk insert the records into the physical table.

    Currently I am considering the two approaches below to implement the process described above (please see the code segments). I need to choose between them based on performance. I would really appreciate expert comments on the runtime of the two approaches.

    (I don't see much difference in time consumption in my test environment, which has only a limited data set. Once deployed in the production environment, this process will involve building a complex set of structures over a large product set.)


    Approach I:
    DECLARE
      TYPE test_type IS TABLE OF test_tab%ROWTYPE INDEX BY BINARY_INTEGER;
      test_type_ test_type;
      ins_idx_   NUMBER;
    BEGIN
      ins_idx_ := 1;
      LOOP  -- population loop (body abbreviated in the post)
        test_type_(ins_idx_).column1 := value1;
        test_type_(ins_idx_).column2 := value2;
        test_type_(ins_idx_).column3 := value3;
        ins_idx_ := ins_idx_ + 1;
      END LOOP;

      FORALL i_ IN 1 .. test_type_.COUNT
        INSERT INTO test_tab VALUES test_type_(i_);
    END;
    /


    Approach II:
    DECLARE
      TYPE Column1 IS TABLE OF test_tab.column1%TYPE INDEX BY BINARY_INTEGER;
      TYPE Column2 IS TABLE OF test_tab.column2%TYPE INDEX BY BINARY_INTEGER;
      TYPE Column3 IS TABLE OF test_tab.column3%TYPE INDEX BY BINARY_INTEGER;
      column1_ Column1;
      column2_ Column2;
      column3_ Column3;
      ins_idx_ NUMBER;
    BEGIN
      ins_idx_ := 1;
      LOOP  -- population loop (body abbreviated in the post)
        column1_(ins_idx_) := value1;
        column2_(ins_idx_) := value2;
        column3_(ins_idx_) := value3;
        ins_idx_ := ins_idx_ + 1;
      END LOOP;

      FORALL idx_ IN 1 .. column1_.COUNT
        INSERT INTO n_part_cost_bucket_tab (
          column1,
          column2,
          column3)
        VALUES (
          column1_(idx_),
          column2_(idx_),
          column3_(idx_));
    END;
    /

    Best regards
    Lorenzo

    Published by: nipuna86 on January 3, 2013 22:23

    nipuna86 wrote:

    I am implementing a batch job to fill a table with a large number of records (>3,000,000). To reduce execution time, I use PL/SQL collections to temporarily store the data that must be written to the destination table. Once all records have been accumulated in the collection, I use a FORALL statement to bulk insert the records into the physical table.

    Performance is about more than just reducing execution time.

    Just as stopping a car involves more than bringing the car to a halt in the fastest possible time.

    If it were (simply stopping a car), then a reinforced-concrete brick wall would be the perfect way to stop all kinds of motor vehicles at all sorts of speeds.

    The only problem (well, more than one actually) is that stopping a vehicle that way is bad for the car, the engine, the driver, the passengers and any other contents inside.

    And pushing 3 million records into a PL/SQL "table" (btw, that is wrong terminology - there is no PL/SQL table structure) in order to run a SQL INSERT cursor 3 million times, to reduce execution time, is no different from using a brick wall to stop a car.

    Both approaches are pretty badly flawed. Both place an unreasonable demand on PGA memory. Both are still row-by-row (aka slow-by-slow) processing.
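The usual way to keep the PGA demand bounded while still getting the context-switch reduction is to fetch and insert in limited batches. A sketch, assuming a source cursor over the data the batch builds (the cursor and table names are illustrative, not from the post):

```sql
DECLARE
  CURSOR src_cur IS SELECT * FROM source_tab;
  TYPE t_rows IS TABLE OF source_tab%ROWTYPE INDEX BY PLS_INTEGER;
  l_rows t_rows;
BEGIN
  OPEN src_cur;
  LOOP
    -- At most 1000 rows in memory at a time: bounded PGA,
    -- and one context switch per 1000 rows instead of per row.
    FETCH src_cur BULK COLLECT INTO l_rows LIMIT 1000;
    EXIT WHEN l_rows.COUNT = 0;

    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO test_tab VALUES l_rows(i);
  END LOOP;
  CLOSE src_cur;
  COMMIT;
END;
/
```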

  • Insert into ... select * from table - is the ORDER BY needless?

    I've got an example script; it will work for any table, so I won't bother with the full DDL.

    ----------------------------------------------------------------------------
    create table test as select * from hotels where 1 = 2;

    insert into test select * from hotels order by city;

    select city from test;

    -- drop table test;
    -----------------------------------------------------------------------------

    The amazing thing is that the city column comes back ordered alphabetically,
    but you would say that an ORDER BY on an INSERT is irrelevant.

    Any ideas on that?

    Will this always work?

    Edited by: FourEyes on December 8, 2008 22:55

    Edited by: FourEyes on 8 December 2008 22:56

    Edited by: FourEyes on 8 December 2008 22:56

    Hello

    The [Oracle 10g SQL Language Reference|http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9014.htm#sthref9371] manual says:

    "With respect to the ORDER BY clause of the subquery in the DML_table_expression_clause, ordering is guaranteed only for the rows being inserted, and only within each extent of the table. Ordering of new rows with respect to existing rows is not guaranteed."
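In other words, if the output order matters, it has to be requested at read time rather than relied on from the insert. A one-line sketch against the test table above:

```sql
-- Ordering is only guaranteed where it is requested, on the query itself:
SELECT city FROM test ORDER BY city;
```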

  • Bulk insert from an external table

    Hi, I get errors ORA-29913 and ORA-01410 when trying to do a bulk insert from an external table

    INSERT INTO CORE_TB_LOG
    SELECT 'MODEL', 'ARCH_BH_MODEL', ROWID, 'MODEL-D-000000001', -45, 'A', SYSDATE, 'R'
    FROM ARCH_BH_MODEL1
    WHERE length(MOD_XCODIGO) > 10;

    INSERT
    *
    ERROR at line 1:
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-01410: invalid ROWID

    ARCH_BH_MODEL1 is the external table.


    What's wrong?


    Thank you.

    Hello

    There is no ROWID in external tables.

    It makes sense: ROWID identifies where a row is stored in the database; it encodes the data file and the block number within that file.

    External tables are not stored in the database.  They exist independently of any data file in the database.  The concept of an Oracle block does not apply to them.

    Why would you copy the ROWID, even if you could?

    Apart from ROWID and SYSDATE, you select only literals.  Don't you want to select any of the real data from the external table?

    What is the big picture here?  Post a small example of data (a CREATE TABLE statement and a small data file for the external table) and the desired results from that sample data (in other words, what core_tb_log must contain after the INSERT is complete).  Explain how you get those results from the data provided.

    Check out the Forum FAQ: Re: 2. How can I ask a question on the forums?

  • Bulk insert using FORALL is slow

    Hi All,
    I have two SQL scripts. One has plain insert statements and the other uses bulk insert; both do the same thing.

    (1) using the Bulk Insert
    Set serveroutput on;
    
    Declare
              type t_hgn_no is table of r_dummy_1%rowtype
              index by binary_integer;
              type t_flx_no is table of varchar2(4000)
              index by binary_integer;
              
              l_hgn_no t_hgn_no;
              l_flx_no t_flx_no;
    
              begin_time number;
              end_time   number;
    
              
    Begin
         select (dbms_utility.get_time) into begin_time from dual;
         dbms_output.put_line('started at : '||begin_time);
         
         
         With t as
         (
         Select '100004501' HOGAN_REFERENCE_NUMBER , '320IVLA092811011' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100014501' HOGAN_REFERENCE_NUMBER , '320IVLA092811010' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100024501' HOGAN_REFERENCE_NUMBER , '320IVLA092811009' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100034501' HOGAN_REFERENCE_NUMBER , '320IVLA092811008' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100044501' HOGAN_REFERENCE_NUMBER , '320IVLA092811007' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '10006501' HOGAN_REFERENCE_NUMBER , '140IGL2092811951' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100074501' HOGAN_REFERENCE_NUMBER , '320IVLA092811006' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '10007501' HOGAN_REFERENCE_NUMBER , '200IVLA092810617' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100084501' HOGAN_REFERENCE_NUMBER , '320SVLA092810002' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100094501' HOGAN_REFERENCE_NUMBER , '320IVLA092811005' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100104501' HOGAN_REFERENCE_NUMBER , '320IVLA092811004' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100114501' HOGAN_REFERENCE_NUMBER , '320IVLA092811003' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100124501' HOGAN_REFERENCE_NUMBER , '320IVLA092811002' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100134501' HOGAN_REFERENCE_NUMBER , '320IVLA092811001' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100144501' HOGAN_REFERENCE_NUMBER , '320SVLA092810001' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '10016501' HOGAN_REFERENCE_NUMBER , '140IGL2092811950' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '10017501' HOGAN_REFERENCE_NUMBER , '200IVLA092810616' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '100217851' HOGAN_REFERENCE_NUMBER , '520USDL092818459' FLEXCUBE_REFERENCE_NUMBER from dual 
         union all
         Select '1002501' HOGAN_REFERENCE_NUMBER , '100PVL2092813320' FLEXCUBE_REFERENCE_NUMBER from dual 
              )
         Select HOGAN_REFERENCE_NUMBER,FLEXCUBE_REFERENCE_NUMBER
         bulk collect into l_hgn_no
         from t;
    
         forall i in 1..l_hgn_no.count
         
         Insert into r_dummy_1 values l_hgn_no(i);
    
         
    
    
    
    
    
    
         
         Commit;
         select (dbms_utility.get_time) into end_time from dual;
         dbms_output.put_line('ended at : '||end_time);
    
         
         
    Exception
              When others then
                   dbms_output.put_line('Exception : '||sqlerrm);
                   rollback;
    End;
    /
    Duration for bulk collect
    ==================


    SQL> @d:/bulk_insert.sql
    started at : 1084934013
    ended at : 1084972317

    PL/SQL procedure successfully completed.

    SQL> select 1084972317 - 1084934013 from dual;

    1084972317-1084934013
    ---------------------
                    38304




    (2) using the Insert statement
    Declare
              begin_time number;
              end_time   number;
    
    Begin
                        select (dbms_utility.get_time) into begin_time from dual;
                        dbms_output.put_line('started at : '||begin_time);
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('36501', '100CFL3092811385');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('106501', '100CFL3092811108');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('172501', '100SFL1092810013');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('192501', '100SVL2092814600');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('212501', '100SVL2092814181');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('272501', '100AFL309281B2LZ');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('292501', '100AVL2092812200');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('332501', '100SVL2092814599');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('346501', '100AFL309281B2LY');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('372501', '100SVL2092814598');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('382501', '100IVL1092811512');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('422501', '100SFL1092810020');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('432501', '100IVL1092811447');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('462501', '100CFL3092811107');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('492501', '100SVL2092814245');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('542501', '100AVL2092812530');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('592501', '100CFL3092811105');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('612501', '100SVL2092814242');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('632501', '100CFL3092811384');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('712501', '100PVL2092813321');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('722501', '100PVL2092813311');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('732501', '100PVL2092813341');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('742501', '100PVL2092813319');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('752501', '100PVL2092813308');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('762501', '100PVL2092813338');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('772501', '100PVL2092813316');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('782501', '100PVL2092813305');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('786501', '100CFL2092810051');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('792501', '100PVL2092813335');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('802501', '100PVL2092813313');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('812501', '100PVL2092813302');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('822501', '100PVL2092813332');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('832501', '100PVL2092813310');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('852501', '100PVL2092813329');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('862501', '100PVL2092813307');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('872501', '100PVL2092813299');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('882501', '100PVL2092813326');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('922501', '100PVL2092813304');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('932501', '100PVL2092813296');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('952501', '100PVL2092813300');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('962501', '100PVL2092813293');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('972501', '100PVL2092813323');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('982501', '100PVL2092813297');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1002501', '100PVL2092813320');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1012501', '100PVL2092813294');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1022501', '100PVL2092813290');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1032501', '100PVL2092813317');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1042501', '100PVL2092813291');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1052501', '100PVL2092813287');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1062501', '100PVL2092813315');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1072501', '100PVL2092813288');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1082501', '100AFL309281B2LX');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1092501', '100PVL2092813312');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1102501', '100PVL2092813285');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1112501', '100PVL2092813284');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1122501', '100PVL2092813309');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1142501', '100PVL2092813281');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1152501', '100PVL2092813306');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1162501', '100PVL2092813282');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1166501', '100CFL3092811383');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1212501', '100IVL1092811445');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1232501', '100IVL1092811526');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1272501', '100IVL1092811441');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1292501', '100IVL1092811523');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1302501', '100PVL2092813303');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1312501', '100PVL2092813279');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1322501', '100PVL2092813278');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1332501', '100PVL2092813301');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1342501', '100PVL2092813276');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1352501', '100PVL2092813275');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1376501', '100AFL309281B2LW');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1382501', '100PVL2092813272');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1392501', '100PVL2092813298');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1402501', '100PVL2092813273');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1412501', '100PVL2092813269');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1446501', '100RNF6092810019');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1452501', '100IVL1092811436');
    
                        insert into r_dummy (HOGAN_REFERENCE_NUMBER, FLEXCUBE_REFERENCE_NUMBER)
                        values ('1492501', '100CFL3092811382');
    
         
    
                        
                        select (dbms_utility.get_time) into end_time from dual;
                        dbms_output.put_line('ended at : '||end_time);
    Exception
              When Others Then
                         dbms_output.put_line('Exception Occured '||sqlerrm);
                         rollback;
    End;
    /
    duration for the insert script
    ====================
    SQL> @@d:/insert_bhanu.sql
    started at : 1084984928
    ended at : 1084988401
    
    PL/SQL procedure successfully completed.
    
    SQL> select 1084988401 - 1084984928 from dual;
    
    1084988401-1084984928
    ---------------------
                     3473
    I was unable to paste all of the code... it has almost 13851 records. Please suggest the best option, and whether there is another way to achieve this.
    I need to provide an optimized solution to my clients.

    Regards
    Rambeau.

    792353 wrote:

    I have two sql scripts. Having just insert statements and the other using bulk insert both do the same thing.

    Not really valid for purposes of comparison.

    The fundamental question is what makes bulk processing faster. It is well known and easily answered - a reduction in context switches between the SQL and PL/SQL engines. And that's all. Nothing more and nothing magical.

    The easiest way to show this difference is to eliminate all other factors - especially I/O, as one test may be disadvantaged by physical I/O while the comparison test is favoured by logical I/O. Another factor that must be eliminated is extra unnecessary SQL that adds overhead (such as the use of DUAL and unions) and so on.

    Remember that it is critical for a benchmark to compare like with like and eliminate all other factors. Keep it as simplistic as possible. For example, something like the following:

    SQL> create table foo_table( id number );
    
    Table created.
    
    SQL> var iterations number
    SQL> exec :iterations := 10000;
    
    PL/SQL procedure successfully completed.
    
    SQL>
    SQL> declare
      2          t1      timestamp with time zone;
      3  begin
      4          dbms_output.put_line( 'bench A: hard parsing, normal delete' );
      5          t1 := systimestamp;
      6          for i in 1..:iterations
      7          loop
      8                  execute immediate 'delete from foo_table where id = '||i;
      9          end loop;
     10          dbms_output.put_line( systimestamp - t1 );
     11  end;
     12  /
    bench A: hard parsing, normal delete
    +000000000 00:00:07.639779000
    
    PL/SQL procedure successfully completed.
    
    SQL>
    SQL> declare
      2          t1      timestamp with time zone;
      3  begin
      4          dbms_output.put_line( 'bench B: soft parsing, normal delete' );
      5          t1 := systimestamp;
      6          for i in 1..:iterations
      7          loop
      8                  delete from foo_table where id = i;
      9          end loop;
     10          dbms_output.put_line( systimestamp - t1 );
     11  end;
     12  /
    bench B: soft parsing, normal delete
    +000000000 00:00:00.268915000
    
    PL/SQL procedure successfully completed.
    
    SQL>
    SQL> declare
      2          type TNumbers is table of number;
      3          t1      timestamp with time zone;
      4          idList  TNumbers;
      5  begin
      6          dbms_output.put_line( 'bench C: soft parsing, bulk delete' );
      7          idList := new TNumbers();
      8          idList.Extend( :iterations );
      9
     10          for i in 1..:iterations
     11          loop
     12                  idList(i) := i;
     13          end loop;
     14
     15          t1 := systimestamp;
     16          forall i in 1..:iterations
     17                  delete from foo_table where id = idList(i);
     18          dbms_output.put_line( systimestamp - t1 );
     19  end;
     20  /
    bench C: soft parsing, bulk delete
    +000000000 00:00:00.061639000
    
    PL/SQL procedure successfully completed.
    
    SQL> 
    

    Why an empty table? It eliminates potential differences between physical and logical I/O. Why a DELETE and not an INSERT statement? The same reason.

    The foregoing clearly shows how slow hard parsing is. Bench B shows the advantage of soft parsing. Bench C shows the advantage of reducing context switches, at the expense of additional memory - bench C consumes much more memory (PGA) than the other benchmarks.
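    The hard-parse cost in bench A comes from concatenating the literal into the SQL text, which makes every iteration a unique statement. A sketch (same setup and foo_table as above; the "bench A2" label is mine) of fixing bench A with a bind variable, so every iteration soft-parses the same cursor:

    ```sql
    -- bench A2: bind variable so all iterations share one cursor
    declare
            t1      timestamp with time zone;
    begin
            dbms_output.put_line( 'bench A2: soft parsing via bind variable' );
            t1 := systimestamp;
            for i in 1..:iterations
            loop
                    -- :1 is bound per execution; the statement text never changes
                    execute immediate 'delete from foo_table where id = :1' using i;
            end loop;
            dbms_output.put_line( systimestamp - t1 );
    end;
    /
    ```

    Timings would be expected to land near bench B, since the only remaining overhead relative to static SQL is the EXECUTE IMMEDIATE call itself.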

    Also note that these benchmarks run the exact same DELETE SQL statement using different approaches - only the approach (and not the SQL) made the difference. This means that we can draw valid conclusions about each of these approaches.

    Can we say the same about the scripts you used to "show" that bulk processing is apparently slow?
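    Coming back to the original INSERT ... SELECT question: the worry about the scalar subselect re-executing per row can be sidestepped entirely by turning it into a join. A sketch, assuming the table and column names from the question (Oracle also applies scalar subquery caching, so in practice a subselect with constant inputs is typically executed once anyway):

    ```sql
    INSERT INTO emp_new (col1, col2, transaction_id)
    SELECT o.col1,
           o.col2,
           t.transaction_id
    FROM   emp_old o
    CROSS JOIN (SELECT transaction_id
                FROM   main_transaction
                WHERE  region_code = 'US') t;
    ```

    Since the subquery returns a single row, the cross join simply appends that transaction_id to every row of emp_old.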

  • Bulk insert into a table

    Hello
    Version 10g

    I need to pass an array of 100 records to a PL/SQL procedure.
    The array is built on the table's columns.
    Then I need to bulk insert it into the table.

    Question: is it possible to pass such an array to a stored procedure?

    Thank you

    Yes, you can. Check the code below.

    SQL> create type trec is object (a number, b number);
      2  /
    
    Type created.
    
    SQL> create type tlist is table of trec;
      2  /
    
    Type created.
    
    SQL> create table coltab(col1 number, col2 number, entry_date date);
    
    Table created.
    
    SQL> ed
    Wrote file afiedt.buf
    
      1  create or replace procedure arraytest (p tlist) is
      2  begin
      3  for i in p.first..p.last
      4  loop
      5  insert into coltab values
      6  (p(i).a,p(i).b,sysdate);
      7  end loop;
      8* end;
    SQL> /
    
    Procedure created.
    
    -----------Testing--------------------
    SQL> declare
      2  l tlist := tlist(trec(1,2),trec(4,3));
      3  begin
      4  arraytest(l);
      5  end;
      6  /
    
    PL/SQL procedure successfully completed.
    
    SQL>
    SQL>
    SQL> select * from coltab;
    
          COL1       COL2 ENTRY_DAT
    ---------- ---------- ---------
             1          2 04-AUG-10
             4          3 04-AUG-10
    

    If you do not want to process row by row, try this; it works in a similar way.

     create or replace procedure arraytest (p tlist) is
     begin
         insert into coltab
         select t1.*, sysdate from table(p) t1;
     end;
     /
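    If the values arrive as plain scalar collections rather than an object-type table, FORALL is another option: it sends the whole batch to the SQL engine in one context switch. A sketch, assuming the same coltab table (the arraytest_bulk name and the use of sys.odcinumberlist are my choices, not from the thread):

    ```sql
    create or replace procedure arraytest_bulk (a_list sys.odcinumberlist,
                                                b_list sys.odcinumberlist) is
    begin
            -- one context switch for the whole batch,
            -- instead of one INSERT round trip per element
            forall i in 1..a_list.count
                    insert into coltab (col1, col2, entry_date)
                    values (a_list(i), b_list(i), sysdate);
    end;
    /
    ```

    Scalar collections sidestep the pre-11g restriction on referencing fields of a record inside a FORALL bulk bind.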
    
  • Error: SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA 'Win32_Processor' AND TargetInstance.LoadPercentage

    Original title: Windows Vista SP2 does not start properly. Very, very slow and I do not have access to open anything.

    I was able to boot into safe mode and check the Event Viewer, and this is what it said:

    ./root/CIMV2

        SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA 'Win32_Processor' AND TargetInstance.LoadPercentage > 99
        0 x 80041003

    Hi Noomcy,

    1. Did you make any changes before this problem appeared?

    This problem occurs if the WMI filter is accessed without the necessary permissions. To resolve this problem, we recommend that you run the script in this article and check the result.

    See also Optimize Windows Vista for better performance

    Visit our Microsoft answers feedback Forum and let us know what you think.

  • Bulk insert of rows into an ADF table

    Hello

    My requirement is to insert the rows selected from one VO into a new VO at once.

    Can anyone guide me on this please?

    Thanks in advance.

    Not really sure what you are after.

    If you are saying that you have the details in a read-only VO and now must insert them into an EO-based VO, then yes, you must loop through the rows.

    You can refer to the blog below, which talks about cloning rows.

    https://blogs.Oracle.com/vijaymohan/entry/cloning_a_view_object_rows_adfbc_way

    It will be useful.

    See you soon

    AJ

  • Extract IDs from a table (sample INSERTs provided)

    I would like to get the IDs meeting specific criteria:
    1. IDs that ONLY have LOST STATUS and TAKEN_DTE as NULL in the YEARs 2010/2011.
    The ID may have STATUS of TAKEN prior to YEAR 2010, but it must first meet the above criteria.


    CREATE TABLE chloe_c (
    ID varchar (8),
    YEAR varchar (8),
    STATE varchar (8),
    TAKEN_DTE varchar (8),
    STATUS varchar (8));

    INSERT ALL
    INTO chloe_c (ID, YEAR, STATE, TAKEN_DTE, STATUS) VALUES ('80730', '2006', '602', '13-MAR-2006', 'TAKEN')
    INTO chloe_c (ID, YEAR, STATE, TAKEN_DTE, STATUS) VALUES ('80730', '2010', '705', '12-MAR-2010', 'LOST')
    INTO chloe_c (ID, YEAR, STATE, TAKEN_DTE, STATUS) VALUES ('6321', '2012', '1203', '13-MAR-2012', 'LOST')
    INTO chloe_c (ID, YEAR, STATE, TAKEN_DTE, STATUS) VALUES ('6321', '2012', '1209', '13-MAR-2012', 'TAKEN')
    INTO chloe_c (ID, YEAR, STATE, TAKEN_DTE, STATUS) VALUES ('75454', '2010', '1015', '', 'LOST')
    INTO chloe_c (ID, YEAR, STATE, TAKEN_DTE, STATUS) VALUES ('75454', '2011', '602', '13-MAR-2011', 'TAKEN')
    INTO chloe_c (ID, YEAR, STATE, TAKEN_DTE, STATUS) VALUES ('100', '2010', '6102', '', 'LOST')
    INTO chloe_c (ID, YEAR, STATE, TAKEN_DTE, STATUS) VALUES ('100', '2009', '6152', '13-MAR-2009', 'TAKEN')
    INTO chloe_c (ID, YEAR, STATE, TAKEN_DTE, STATUS) VALUES ('100', '2006', '61152', '13-MAR-2006', 'LOST')
    INTO chloe_c (ID, YEAR, STATE, TAKEN_DTE, STATUS) VALUES ('99', '2009', '5402', '', 'LOST')
    INTO chloe_c (ID, YEAR, STATE, TAKEN_DTE, STATUS) VALUES ('99', '2010', '6102', '13-MAR-2010', 'LOST')
    INTO chloe_c (ID, YEAR, STATE, TAKEN_DTE, STATUS) VALUES ('88', '2011', '6102', '', 'LOST')
    INTO chloe_c (ID, YEAR, STATE, TAKEN_DTE, STATUS) VALUES ('88', '2010', '6102', '', 'LOST')
    INTO chloe_c (ID, YEAR, STATE, TAKEN_DTE, STATUS) VALUES ('88', '2009', '6102', '13-MAR-2010', 'TAKEN')
    SELECT * FROM dual;

    THE EXPECTED RESULT:
    ID 88: YES, because it satisfies the criteria - it has NULL TAKEN_DTE and STATUS of LOST for both years.
    ID 99: NO - it satisfies the STATUS condition, but it has a TAKEN_DTE in year 2010.
    ID 100: YES - it has STATUS of LOST in YEAR 2010 (the TAKEN status is prior to 2010, so that is OK).
    ID 6321: NO - the YEAR is 2012 (WE DO NOT WANT ANYTHING FROM 2012).
    ID 75454: NO - the STATUS is LOST for 2010, which is what we want, but it is also TAKEN in 2011, which we do not want.
    ID 80730: NO - even though the STATUS is LOST, TAKEN_DTE is not NULL.
    ID 
    88
    100
    I tried something like this:
    Select distinct C.ID
    FROM chloe_c C
    WHERE  c.taken_dte IS NULL
    AND YEAR in ('2010','2011')
    AND ID not in (SELECT ID FROM chloe_c c WHERE c.status = 'Taken' and c.year in ('2011','2010'))
    but it also gives me ID 75454

    Hello

    That's what you asked for:

    SELECT    id
    FROM       chloe_c
    WHERE       year          IN ('2010', '2011')
    GROUP BY  id
    HAVING       COUNT ( CASE
                      WHEN  status     = 'LOST'
                    AND   taken_dte     IS NULL
                    THEN  'OK'
                  END
              ) = COUNT (*)
    ;
    

    Thanks for posting the sample data; don't forget why you do it: so that the people who want to help you can recreate the problem and test their ideas. If you post code that does not work, it is not as useful. You are trying to put 11-character strings, for example '13-MAR-2006', into a column of type VARCHAR (8). Test your code before you post.
    Use VARCHAR2, not VARCHAR, for strings.
    Use DATE (or perhaps TIMESTAMP), not strings, for date information.
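    Following that advice, a corrected DDL sketch (the column sizes are my guesses, not from the thread):

    ```sql
    CREATE TABLE chloe_c (
        id        VARCHAR2(8),
        year      VARCHAR2(8),
        state     VARCHAR2(8),
        taken_dte DATE,          -- a real date, not a string
        status    VARCHAR2(8));
    ```

    With taken_dte as a DATE, the IS NULL test stays the same and date comparisons become safe.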

  • No rows returned by a spatial query wrapped in SELECT * FROM...

    Hello

    When I run a query containing SDO_EQUAL, I get some very strange behaviour. The SDO_EQUAL query on its own works fine, but if I wrap it in SELECT * FROM, I get no results. If I wrap SDO_ANYINTERACT in SELECT * FROM, I get the expected result.

    It seems like the spatial index is used when the SDO_EQUAL query runs on its own, but not when it is wrapped in SELECT * FROM. Weird. The spatial index is also not used when SDO_ANYINTERACT is wrapped in SELECT * FROM... so I don't know why that one returns the correct answer.

    I get this problem on 11.2.0.2 on Red Hat Linux 64-bit and 11.2.0.1 on Windows XP 32-bit (i.e., all versions of 11g I've tried). The query works as expected on 10.2.0.5 on Windows Server 2003 64-bit.

    Any ideas?

    Confused in Dublin (John)

    Test case...
    SQL> 
    SQL> -- Create a table and insert the same geometry twice
    SQL> DROP TABLE sdo_equal_query_test;
    
    Table dropped.
    
    SQL> CREATE TABLE sdo_equal_query_test (
      2  id NUMBER,
      3  geometry SDO_GEOMETRY);
    
    Table created.
    
    SQL> 
    SQL> INSERT INTO sdo_equal_query_test VALUES (1,
      2  SDO_GEOMETRY(3003, 81989, NULL, SDO_ELEM_INFO_ARRAY(1, 1003, 1),
      3  SDO_ORDINATE_ARRAY(1057.39, 1048.23, 4, 1057.53, 1046.04, 4, 1057.67, 1043.94, 4, 1061.17, 1044.60, 5, 1060.95, 1046.49, 5, 1060.81, 1047.78, 5, 1057.39, 1048.23, 4)));
    
    1 row created.
    
    SQL> 
    SQL> INSERT INTO sdo_equal_query_test VALUES (2,
      2  SDO_GEOMETRY(3003, 81989, NULL, SDO_ELEM_INFO_ARRAY(1, 1003, 1),
      3  SDO_ORDINATE_ARRAY(1057.39, 1048.23, 4, 1057.53, 1046.04, 4, 1057.67, 1043.94, 4, 1061.17, 1044.60, 5, 1060.95, 1046.49, 5, 1060.81, 1047.78, 5, 1057.39, 1048.23, 4)));
    
    1 row created.
    
    SQL> 
    SQL> -- Setup metadata
    SQL> DELETE FROM user_sdo_geom_metadata WHERE table_name = 'SDO_EQUAL_QUERY_TEST';
    
    1 row deleted.
    
    SQL> INSERT INTO user_sdo_geom_metadata VALUES ('SDO_EQUAL_QUERY_TEST','GEOMETRY',
      2  SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', 0, 100000, .0001), SDO_DIM_ELEMENT('Y', 0, 100000, .0001), SDO_DIM_ELEMENT('Z', -100, 4000, .0001))
      3  ,81989);
    
    1 row created.
    
    SQL> 
    SQL> -- Create spatial index
    SQL> DROP INDEX sdo_equal_query_test_spind;
    DROP INDEX sdo_equal_query_test_spind
               *
    ERROR at line 1:
    ORA-01418: specified index does not exist
    
    
    SQL> CREATE INDEX sdo_equal_query_test_spind ON sdo_equal_query_test(geometry) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    
    Index created.
    
    SQL> 
    SQL> -- Ensure data is valid
    SQL> SELECT sdo_geom.validate_geometry_with_context(sdo_cs.make_2d(geometry), 0.0001) is_valid
      2  FROM sdo_equal_query_test;
    
    IS_VALID
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    TRUE
    TRUE
    
    2 rows selected.
    
    SQL> 
    SQL> -- Check query results using sdo_equal
    SQL> SELECT b.id
      2  FROM sdo_equal_query_test a, sdo_equal_query_test b
      3  WHERE a.id = 1
      4  AND b.id != a.id
      5  AND sdo_equal(a.geometry, b.geometry) = 'TRUE';
    
            ID
    ----------
             2
    
    1 row selected.
    
    SQL> 
    SQL> -- Check query results using sdo_equal wrapped in SELECT * FROM
    SQL> -- Results should be the same as above, but... no rows selected
    SQL> SELECT * FROM (
      2       SELECT b.id
      3       FROM sdo_equal_query_test a, sdo_equal_query_test b
      4       WHERE a.id = 1
      5       AND b.id != a.id
      6       AND sdo_equal(a.geometry, b.geometry) = 'TRUE'
      7  );
    
    no rows selected
    
    SQL> 
    SQL> -- So that didn't work.  Now try sdo_anyinteract... this works ok
    SQL> SELECT * FROM (
      2       SELECT b.id
      3       FROM sdo_equal_query_test a, sdo_equal_query_test b
      4       WHERE a.id = 1
      5       AND b.id != a.id
      6       AND sdo_anyinteract(a.geometry, b.geometry) = 'TRUE'
      7  );
    
            ID
    ----------
             2
    
    1 row selected.
    
    SQL> 
    SQL> -- Now try a scalar query
    SQL> SELECT * FROM (
      2       SELECT b.id
      3       FROM sdo_equal_query_test a, sdo_equal_query_test b
      4       WHERE a.id = 1
      5       AND b.id != a.id
      6  );
    
            ID
    ----------
             2
    
    1 row selected.
    
    SQL> spool off
    Here is the explain plan output for the query that works. Note that the spatial index is used.
    SQL> EXPLAIN PLAN FOR
      2  SELECT b.id
      3  FROM sdo_equal_query_test a, sdo_equal_query_test b
      4  WHERE a.id = 1
      5  AND b.id != a.id
      6  AND sdo_equal(a.geometry, b.geometry) = 'TRUE';
    
    Explained.
    
    SQL> @?/rdbms/admin/utlxpls.sql
    
    PLAN_TABLE_OUTPUT
    ------------------------------------------------------------------------------------------------------------
    Plan hash value: 3529470109
    
    ------------------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |                            |     1 |  7684 |     3   (0)| 00:00:01 |
    |   1 |  RESULT CACHE                 | f5p63r46pbzty4sr45td1uv5g8 |       |       |            |       |
    |   2 |   NESTED LOOPS                |                            |     1 |  7684 |     3   (0)| 00:00:01 |
    |*  3 |    TABLE ACCESS FULL          | SDO_EQUAL_QUERY_TEST       |     1 |  3836 |     3   (0)| 00:00:01 |
    |*  4 |    TABLE ACCESS BY INDEX ROWID| SDO_EQUAL_QUERY_TEST       |     1 |  3848 |     3   (0)| 00:00:01 |
    |*  5 |     DOMAIN INDEX              | SDO_EQUAL_QUERY_TEST_SPIND |       |       |     0   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       3 - filter("B"."ID"!=1)
       4 - filter("A"."ID"=1 AND "B"."ID"!="A"."ID")
       5 - access("MDSYS"."SDO_EQUAL"("A"."GEOMETRY","B"."GEOMETRY")='TRUE')
    ..... other stuff .....     
    Here is the explain plan output for the query that does not work. Note that the spatial index is not used.
    SQL> EXPLAIN PLAN FOR
      2  SELECT * FROM (
      3     SELECT b.id
      4     FROM sdo_equal_query_test a, sdo_equal_query_test b
      5     WHERE a.id = 1
      6     AND b.id != a.id
      7     AND sdo_equal(a.geometry, b.geometry) = 'TRUE'
      8  );
    
    Explained.
    
    SQL> @?/rdbms/admin/utlxpls.sql
    
    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------------------------------
    Plan hash value: 1024466006
    
    --------------------------------------------------------------------------------------------------
    | Id  | Operation           | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT    |                            |     1 |  7684 |     6   (0)| 00:00:01 |
    |   1 |  RESULT CACHE       | 2sd35wrcw3jr411bcg3sz161f6 |       |       |            |          |
    |   2 |   NESTED LOOPS      |                            |     1 |  7684 |     6   (0)| 00:00:01 |
    |*  3 |    TABLE ACCESS FULL| SDO_EQUAL_QUERY_TEST       |     1 |  3836 |     3   (0)| 00:00:01 |
    |*  4 |    TABLE ACCESS FULL| SDO_EQUAL_QUERY_TEST       |     1 |  3848 |     3   (0)| 00:00:01 |
    --------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       3 - filter("B"."ID"!=1)
       4 - filter("A"."ID"=1 AND "B"."ID"!="A"."ID" AND
                  "MDSYS"."SDO_EQUAL"("A"."GEOMETRY","B"."GEOMETRY")='TRUE')
    ..... other stuff .....               

    Yes, this is bug 9740355. You can get a patch for 11.2.0.1, or wait for 11.2.0.3.

  • Dynamic SQL - INSERT INTO ... SELECT

    Hello

    I need some advice on how to change my dynamic SQL code (INSERT INTO ... SELECT) to cope with changes to the underlying tables.

    My apologies for the long post - I tried to keep it to a minimum, but all these details are necessary to explain my situation.

    I have several tables that a PL/SQL package purges [all rows older than three months, based on the value of creation_date]. For example, here is the DDL for a sample table and its ARCHIVE counterpart:
    create table actual_table (col1 varchar2(100),
        col2 varchar2(100),
        col3 varchar2(100),
        creation_date date)            
                
    create table actual_table_arc (col1 varchar2(100),
        col2 varchar2(100),
        col3 varchar2(100),
        creation_date date,
        archive_date   date) 
    This purge job also archives some data according to business needs. Archiving is done using dynamic SQL as below:
    v_arc_sql_stmt := ' INSERT INTO  actual_table_arc'
                    || ' SELECT '|| get_column_name_list('ACTUAL_TABLE') ||', sysdate  FROM actual_table ' 
                    || ' WHERE  col1 = :a';
    
    EXECUTE IMMEDIATE DBMS_LOB.SUBSTR (v_arc_sql_stmt, 32000, 1) USING v_some_id;    
    Note: I use sysdate in the SELECT above to populate the archive_date column.


    Code for the get_column_name_list() function is below:
       FUNCTION get_column_name_list (p_table_name    VARCHAR2)
       RETURN CLOB
       IS
          CURSOR col_name_cur IS
              SELECT a.column_name
                FROM all_tab_columns a
               WHERE a.table_name =  UPPER(p_table_name)||'_ARC'
                   AND EXISTS (SELECT 'Y'
                                        FROM all_tab_columns b
                                       WHERE b.table_name = UPPER(p_table_name)
                                           AND b.column_name = a.column_name)
                ORDER BY a.column_id;
           v_column_list    CLOB;
      BEGIN
          FOR rec IN col_name_cur
          LOOP
             v_column_list := v_column_list || rec.column_name ||',' ;
          END LOOP;
          v_column_list := SUBSTR(v_column_list, 0, LENGTH(v_column_list)-1);
          RETURN v_column_list;
    END get_column_name_list;
    Now a new column must be added to the ACTUAL_TABLE table, so the new table looks like this:
    actual_table :
       col1 varchar2(100),
       col2 varchar2(100),
       col3 varchar2(100),
       creation_date date,
       new_column varchar2(100)            
    The ARC table, after being changed to include the new column, looks like this:
    actual_table_arc :
       col1 varchar2(100),
       col2 varchar2(100),
       col3 varchar2(100),
       creation_date date,
       archive_date   date,
       new_column varchar2(100)
    Now the dynamic SQL above will not work, because the column order in the ARC table no longer matches the SELECT list.

    Please advise how I can change the dynamic SQL to absorb this change [and all such changes in the future].

    Thank you

    Published by: Amer-user12033597 22-Aug-2011 08:06

    Hello

    I think partitioning is an option of Enterprise Edition; you can check which features you are licensed to use with

    select * from v$option
    

    From what you said:

    I have several tables that a PL/SQL package purges [all rows older than three months, based on the value of creation_date].

    You could partition actual_table by creation_date to give monthly partitions - or daily or weekly, depending on how often you plan to run the purge job. When the job runs, you would identify all partitions of the table containing data older than 3 months - personally I would use a naming convention to do it, like this.

    DTYLER_APP@pssdev2> CREATE TABLE dt_acutal_table
      2  (   col1 varchar2(100),
      3      col2 varchar2(100),
      4      col3 varchar2(100),
      5      creation_date date
      6  )
      7  PARTITION BY RANGE (creation_date)
      8  (   PARTITION ptn_201101 VALUES LESS THAN (TO_DATE('01/02/2011','dd/mm/yyyy')),
      9      PARTITION ptn_201102 VALUES LESS THAN (TO_DATE('01/03/2011','dd/mm/yyyy')),
     10      PARTITION ptn_201103 VALUES LESS THAN (TO_DATE('01/04/2011','dd/mm/yyyy')),
     11      PARTITION ptn_201104 VALUES LESS THAN (TO_DATE('01/05/2011','dd/mm/yyyy')),
     12      PARTITION ptn_201105 VALUES LESS THAN (TO_DATE('01/06/2011','dd/mm/yyyy')),
     13      PARTITION ptn_201106 VALUES LESS THAN (TO_DATE('01/07/2011','dd/mm/yyyy')),
     14      PARTITION ptn_201107 VALUES LESS THAN (TO_DATE('01/08/2011','dd/mm/yyyy')),
     15      PARTITION ptn_201108 VALUES LESS THAN (TO_DATE('01/09/2011','dd/mm/yyyy')),
     16      PARTITION ptn_201109 VALUES LESS THAN (TO_DATE('01/10/2011','dd/mm/yyyy')),
     17      PARTITION ptn_201110 VALUES LESS THAN (TO_DATE('01/11/2011','dd/mm/yyyy')),
     18      PARTITION ptn_201111 VALUES LESS THAN (TO_DATE('01/12/2011','dd/mm/yyyy')),
     19      PARTITION ptn_201112 VALUES LESS THAN (TO_DATE('01/01/2012','dd/mm/yyyy')),
     20      PARTITION ptn_MaxValue VALUES LESS THAN (MAXVALUE)
     21  )
     22  /
    
    Table created.
    
    DTYLER_APP@pssdev2> CREATE TABLE dt_acutal_table_arc
      2  (   col1 varchar2(100),
      3      col2 varchar2(100),
      4      col3 varchar2(100),
      5      creation_date date
      6  )
      7  PARTITION BY RANGE (creation_date)
      8  (   PARTITION ptn_201101 VALUES LESS THAN (TO_DATE('01/02/2011','dd/mm/yyyy')),
      9      PARTITION ptn_201102 VALUES LESS THAN (TO_DATE('01/03/2011','dd/mm/yyyy')),
     10      PARTITION ptn_201103 VALUES LESS THAN (TO_DATE('01/04/2011','dd/mm/yyyy')),
     11      PARTITION ptn_201104 VALUES LESS THAN (TO_DATE('01/05/2011','dd/mm/yyyy')),
     12      PARTITION ptn_201105 VALUES LESS THAN (TO_DATE('01/06/2011','dd/mm/yyyy')),
     13      PARTITION ptn_201106 VALUES LESS THAN (TO_DATE('01/07/2011','dd/mm/yyyy')),
     14      PARTITION ptn_201107 VALUES LESS THAN (TO_DATE('01/08/2011','dd/mm/yyyy')),
     15      PARTITION ptn_201108 VALUES LESS THAN (TO_DATE('01/09/2011','dd/mm/yyyy')),
     16      PARTITION ptn_201109 VALUES LESS THAN (TO_DATE('01/10/2011','dd/mm/yyyy')),
     17      PARTITION ptn_201110 VALUES LESS THAN (TO_DATE('01/11/2011','dd/mm/yyyy')),
     18      PARTITION ptn_201111 VALUES LESS THAN (TO_DATE('01/12/2011','dd/mm/yyyy')),
     19      PARTITION ptn_201112 VALUES LESS THAN (TO_DATE('01/01/2012','dd/mm/yyyy')),
     20      PARTITION ptn_MaxValue VALUES LESS THAN (MAXVALUE)
     21  )
     22  /
    
    Table created.
    
    DTYLER_APP@pssdev2> CREATE TABLE dt_acutal_table_exch
      2  (   col1 varchar2(100),
      3      col2 varchar2(100),
      4      col3 varchar2(100),
      5      creation_date date
      6  )
      7  /
    
    DTYLER_APP@pssdev2> insert
      2  into
      3      dt_acutal_table
      4      (   col1,
      5          col2,
      6          col3,
      7          creation_date
      8      )
      9  SELECT
     10      TO_CHAR(rownum),
     11      TO_CHAR(rownum),
     12      TO_CHAR(rownum),
     13      SYSDATE - ROWNUM
     14  FROM
     15      dual
     16  CONNECT BY
     17      LEVEL <= 365
     18  /
    
    365 rows created.
    
    DTYLER_APP@pssdev2> commit;
    
    Commit complete.
    
    DTYLER_APP@pssdev2> CREATE OR REPLACE FUNCTION f_Get_Partition
      2  (   ad_PurgeDate    IN DATE,
      3      as_Table        IN VARCHAR2
      4  )
      5  RETURN user_tab_partitions.partition_name%TYPE
      6  IS
      7
      8      ls_Partition        user_tab_partitions.partition_name%TYPE;
      9  BEGIN
     10
     11      SELECT
     12          partition_name
     13      INTO
     14          ls_Partition
     15      FROM
     16          user_tab_partitions
     17      WHERE
     18          partition_name =    'PTN_'|| TO_CHAR
     19                                      (   ad_PurgeDate,
     20                                          'YYYYMM'
     21                                      )
     22      AND
     23          table_name = UPPER(as_Table);
     24
     25      RETURN ls_Partition;
     26
     27  EXCEPTION
     28      WHEN NO_DATA_FOUND THEN
     29          RAISE_APPLICATION_ERROR
     30          (   -20001,
     31              'Partition could not be found for table '||as_Table||' and date '||TO_CHAR(ad_PurgeDate)
     32          );
     33  END;
     34  /
    
    Function created.
    
    DTYLER_APP@pssdev2> CREATE OR REPLACE PROCEDURE p_Arc
      2  (   ad_PurgeDate    IN DATE,
      3      as_SourceTable  IN VARCHAR2
      4  )
      5  IS
      6
      7      ls_ExchSQL          VARCHAR2(4000);
      8      ls_ArcSQL           VARCHAR2(4000);
      9      ls_ArcTable         user_tables.table_name%TYPE;
     10      ls_ExchTable        user_tables.table_name%TYPE;
     11      ls_PartitionName    user_tab_partitions.partition_name%TYPE;
     12
     13  BEGIN
     14
     15      ls_PartitionName := f_Get_Partition
     16                          (   ad_PurgeDate,
     17                              as_SourceTable
     18                          );
     19
     20      ls_ArcTable     := as_SourceTable||'_arc';
     21      ls_ExchTable    := as_SourceTable||'_exch';
     22
     23      ls_ExchSql := ' ALTER TABLE '||as_SourceTable||'
     24                      EXCHANGE PARTITION '||ls_PartitionName||'
     25                      WITH TABLE '||ls_ExchTable
     26                     ;
     27
     28      EXECUTE IMMEDIATE ls_ExchSql;
     29
     30      ls_ArcSql := '  ALTER TABLE '||ls_ArcTable||'
     31                      EXCHANGE PARTITION '||ls_PartitionName||'
     32                      WITH TABLE '||ls_ExchTable
     33                     ;
     34
     35      EXECUTE IMMEDIATE ls_ArcSql;
     36
     37  END;
     38  /
    
    Procedure created.
    
    DTYLER_APP@pssdev2> exec p_Arc(TO_DATE('01/06/2011','dd/mm/yyyy'),'dt_acutal_table')
    
    PL/SQL procedure successfully completed.
    
    DTYLER_APP@pssdev2> select count(*) from dt_acutal_table partition( ptn_201106);
    
      COUNT(*)
    ----------
             0
    
    1 row selected.
    
    DTYLER_APP@pssdev2> select count(*) from dt_acutal_table_arc partition( ptn_201106);
    
      COUNT(*)
    ----------
            30
    
    1 row selected.
    
    DTYLER_APP@pssdev2> alter table dt_acutal_table add (newcol number);
    
    Table altered.
    
    DTYLER_APP@pssdev2> alter table dt_acutal_table_exch add (newcol number);
    
    Table altered.
    
    DTYLER_APP@pssdev2> alter table dt_acutal_table_arc add (newcol number);
    
    Table altered.
    
    DTYLER_APP@pssdev2> exec p_Arc(TO_DATE('01/05/2011','dd/mm/yyyy'),'dt_acutal_table')
    
    PL/SQL procedure successfully completed.
    

    HTH

    David
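    Alternatively, the original dynamic SQL can be made independent of column order by naming the columns on the INSERT side as well; a sketch reusing the poster's get_column_name_list function (untested, an assumption on my part that the same list fits both sides):

    ```sql
    v_arc_sql_stmt := ' INSERT INTO actual_table_arc ('
                   || get_column_name_list('ACTUAL_TABLE') || ', archive_date)'
                   || ' SELECT ' || get_column_name_list('ACTUAL_TABLE') || ', sysdate'
                   || ' FROM actual_table'
                   || ' WHERE col1 = :a';
    ```

    Because the same generated column list appears in both the INSERT and the SELECT, adding new_column to both tables no longer breaks the statement.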

  • Very slow SELECT * FROM table

    Hello
    could you please help me understand why execution of
    SELECT * FROM myTable is very slow (more than an hour and still running)?
    The table is 2 GB, BUT disk usage is only 5%!

    When I execute SELECT COUNT(*) FROM myTable, disk usage is 100% and I have the result in 1 minute...

    Thank you.

    Please see the threads [url http://forums.oracle.com/forums/thread.jspa?messageID=1812597#1812597] When your query takes too long... and [url http://forums.oracle.com/forums/thread.jspa?threadID=863295&tstart=0] HOW TO: Post a SQL statement tuning request. Perhaps Sybrand's comment about posting the raw trace files needs a qualification: work through the normal tuning steps before posting the raw trace.

    I think you might have some misconceptions about how parallel processing works and what KEEP does. You could also consult the Concepts manual on when things go into the PGA and when they go into the SGA, and find out how to see PGA usage. If you try to load the SGA buffers by doing a parallel full table scan, it will not work, because that combination uses your PGA and direct multiblock reads to increase performance - a negative in some cases, such as yours.

    In other words, you're [url http://www.doingitwrong.com/wrong/585_munkavedelem.jpg] doing it wrong.
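    As a starting point for the suggestion above about checking PGA usage, v$pgastat is one view to look at (assuming you have the relevant V$ privileges):

    ```sql
    SELECT name, ROUND(value/1024/1024) AS mb
    FROM   v$pgastat
    WHERE  name IN ('total PGA allocated', 'total PGA inuse');
    ```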

  • Add 2 more rows to a SELECT statement without inserting rows into the base table

    Hi all

    I have below a simple select statement that queries a table.

    Select * from STUDY_SCHED_INTERVAL_TEMP
    where STUDY_KEY = 1063;

    but here's the situation. As you can see, it returns 7 rows. But I must add
    2 more rows... with everything else defaulted or as it already exists... nothing changes except the 2 added rows.
    I cannot insert into the base table. I want my results to keep incrementing
    measurement_date_taken day by day up to 01-APR-09... so the largest measurement_date_taken should
    end at study_end_date...



    IS THIS EVEN POSSIBLE WITHOUT INSERTING ROWS INTO THE TABLE, JUST BY PLAYING AROUND WITH
    THE SELECT STATEMENT?

    Sorry if this is confusing... I'm on 10.2.0.3

    Published by: S2K on August 13, 2009 14:19

    Well, I don't know if this query is as beautiful as my lawn, but it seems to work anyway ;)
    I used the "simplified" version, but the principle should work for your table, S2K.
    As Frank has already pointed out (and I stumbled on it myself while kludging): simply select your already existing rows and union them with the "missing records"; you calculate the number of days that you are "missing" based on the study_end_date:

    MHO%xe> alter session set nls_date_language='AMERICAN';
    
    Sessie is gewijzigd.
    
    Verstreken: 00:00:00.01
    MHO%xe> with t as ( -- generating your data here, simplified by me due to cat and lawn
      2  select 1063 study_key
      3  ,      to_date('01-MAR-09', 'dd-mon-rr') phase_start_date
      4  ,      to_date('02-MAR-09', 'dd-mon-rr') measurement_date_taken
      5  ,      to_date('01-APR-09', 'dd-mon-rr') study_end_date
      6  from dual union all
      7  select 1063, to_date('03-MAR-09', 'dd-mon-rr') , to_date('04-MAR-09', 'dd-mon-rr') , to_date('01-APR-09', 'dd-mon-rr') from dual union all
      8  select 1063, to_date('03-MAR-09', 'dd-mon-rr') , to_date('09-MAR-09', 'dd-mon-rr') , to_date('01-APR-09', 'dd-mon-rr') from dual union all
      9  select 1063, to_date('03-MAR-09', 'dd-mon-rr') , to_date('14-MAR-09', 'dd-mon-rr') , to_date('01-APR-09', 'dd-mon-rr') from dual union all
     10  select 1063, to_date('03-MAR-09', 'dd-mon-rr') , to_date('19-MAR-09', 'dd-mon-rr') , to_date('01-APR-09', 'dd-mon-rr') from dual union all
     11  select 1063, to_date('22-MAR-09', 'dd-mon-rr') , to_date('23-MAR-09', 'dd-mon-rr') , to_date('01-APR-09', 'dd-mon-rr') from dual union all
     12  select 1063, to_date('22-MAR-09', 'dd-mon-rr') , to_date('30-MAR-09', 'dd-mon-rr') , to_date('01-APR-09', 'dd-mon-rr') from dual
     13  ) -- actual query:
     14  select study_key
     15  ,      phase_start_date
     16  ,      measurement_date_taken
     17  ,      study_end_date
     18  from   t
     19  union all
     20  select study_key
     21  ,      phase_start_date
     22  ,      measurement_date_taken + level -- or rownum
     23  ,      study_end_date
     24  from ( select study_key
     25         ,      phase_start_date
     26         ,      measurement_date_taken
     27         ,      study_end_date
     28         ,      add_up
     29         from (
     30                select study_key
     31                ,      phase_start_date
     32                ,      measurement_date_taken
     33                ,      study_end_date
     34                ,      study_end_date - max(measurement_date_taken) over (partition by study_key
     35                                                                          order by measurement_date_taken ) add_up
     36                ,      lead(measurement_date_taken) over (partition by study_key
     37                                                          order by measurement_date_taken ) last_rec
     38                from   t
     39              )
     40         where last_rec is null
     41       )
     42  where rownum <= add_up
     43  connect by level <= add_up;
    
     STUDY_KEY PHASE_START_DATE    MEASUREMENT_DATE_TA STUDY_END_DATE
    ---------- ------------------- ------------------- -------------------
          1063 01-03-2009 00:00:00 02-03-2009 00:00:00 01-04-2009 00:00:00
          1063 03-03-2009 00:00:00 04-03-2009 00:00:00 01-04-2009 00:00:00
          1063 03-03-2009 00:00:00 09-03-2009 00:00:00 01-04-2009 00:00:00
          1063 03-03-2009 00:00:00 14-03-2009 00:00:00 01-04-2009 00:00:00
          1063 03-03-2009 00:00:00 19-03-2009 00:00:00 01-04-2009 00:00:00
          1063 22-03-2009 00:00:00 23-03-2009 00:00:00 01-04-2009 00:00:00
          1063 22-03-2009 00:00:00 30-03-2009 00:00:00 01-04-2009 00:00:00
          1063 22-03-2009 00:00:00 31-03-2009 00:00:00 01-04-2009 00:00:00
          1063 22-03-2009 00:00:00 01-04-2009 00:00:00 01-04-2009 00:00:00
    
    9 rows selected.
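    The core of the trick above is Oracle's CONNECT BY LEVEL row generator against DUAL: it manufactures one row per LEVEL up to the bound, which the query then uses to extend measurement_date_taken day by day until study_end_date. In isolation (dates here are purely illustrative, not from the thread's data), the idiom looks like this:

    ```sql
    -- Generate one row per day for a 10-day range, starting 23-MAR-2009.
    -- LEVEL runs 1, 2, 3, ... so we subtract 1 to include the start date itself.
    select date '2009-03-23' + level - 1 as generated_date
    from   dual
    connect by level <= 10;
    ```

    The main query applies the same idea, but bounds LEVEL by add_up (the day gap between the last measurement and study_end_date) instead of a constant.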
    

    If there's a simpler way (in SQL), I hope others will join in and share their ideas/examples/thoughts.
    I have a feeling this approach uses more resources than it needs to.
    But I have to go mow the daisies now, they interfere with my 'grass-green-ness' ;)

Maybe you are looking for