Update statement against 1.4 million rows

Hello

I'm trying to run an update statement against a table with more than 1 million rows in it.

NAME                            Null?     Type
------------------------------- --------- -----
PR_ID                           NOT NULL  NUMBER(12,0)
PR_PROP_CODE                    NOT NULL  VARCHAR2(180)
VALUE                                     CLOB
SHRT_DESC                                 VARCHAR2(250)
VAL_CHAR                                  VARCHAR2(500)
VAL_NUM                                   NUMBER(12,0)
VAL_CLOB                                  CLOB
UNIQUE_ID                                 NUMBER(12,0)

The update that I'm trying to do is to take the VALUE column and, depending on the property code, update one of the three VAL_* columns. When
I run the SQL, it just sits there without erroring. I gave it 24 hours before killing the process.

UPDATE pr.pr_prop_val pv
SET pv.val_char = (
  SELECT a.value_char
  FROM (
    SELECT
      ppv.unique_id,
      CASE ppv.pr_prop_code
        WHEN 'BLMBRG_COUNTRY' THEN to_char(ppv.value)
        WHEN 'BLMBRG_INDUSTRY' THEN to_char(ppv.value)
        WHEN 'BLMBRG_TICKER' THEN to_char(ppv.value)
        WHEN 'BLMBRG_TITLE' THEN to_char(ppv.value)
        WHEN 'BLMBRG_UID' THEN to_char(ppv.value)
        WHEN 'BUSINESSWIRE_TITLE' THEN to_char(ppv.value)
        WHEN 'DJ_EUROASIA_TITLE' THEN to_char(ppv.value)
        WHEN 'DJ_US_TITLE' THEN to_char(ppv.value)
        WHEN 'FITCH_MRKT_SCTR' THEN to_char(ppv.value)
        WHEN 'ORIGINAL_TITLE' THEN to_char(ppv.value)
        WHEN 'RD_CNTRY' THEN to_char(ppv.value)
        WHEN 'RD_MRKT_SCTR' THEN to_char(ppv.value)
        WHEN 'REPORT_EXCEP_FLAG' THEN to_char(ppv.value)
        WHEN 'REPORT_LANGUAGE' THEN to_char(ppv.value)
        WHEN 'REUTERS_RIC' THEN to_char(ppv.value)
        WHEN 'REUTERS_TITLE' THEN to_char(ppv.value)
        WHEN 'REUTERS_TOPIC' THEN to_char(ppv.value)
        WHEN 'REUTERS_USN' THEN to_char(ppv.value)
        WHEN 'RSRCHDIRECT_TITLE' THEN to_char(ppv.value)
        WHEN 'SUMMIT_FAX_BODY_FONT_SIZE' THEN to_char(ppv.value)
        WHEN 'SUMMIT_FAX_TITLE' THEN to_char(ppv.value)
        WHEN 'SUMMIT_FAX_TITLE_FONT_SIZE' THEN to_char(ppv.value)
        WHEN 'SUMMIT_TOPIC' THEN to_char(ppv.value)
        WHEN 'SUMNET_EMAIL_TITLE' THEN to_char(ppv.value)
        WHEN 'XPEDITE_EMAIL_TITLE' THEN to_char(ppv.value)
        WHEN 'XPEDITE_FAX_BODY_FONT_SIZE' THEN to_char(ppv.value)
        WHEN 'XPEDITE_FAX_TITLE' THEN to_char(ppv.value)
        WHEN 'XPEDITE_FAX_TITLE_FONT_SIZE' THEN to_char(ppv.value)
        WHEN 'XPEDITE_TOPIC' THEN to_char(ppv.value)
      END value_char
    FROM pr.pr_prop_val ppv
    WHERE ppv.pr_prop_code NOT IN
      ('BLMBRG_BODY', 'ORIGINAL_BODY', 'REUTERS_BODY', 'SUMMIT_FAX_BODY',
       'XPEDITE_EMAIL_BODY', 'XPEDITE_FAX_BODY', 'PR_DISCLOSURE_STATEMENT', 'PR_DISCLAIMER')
  ) a
  WHERE a.unique_id = pv.unique_id
  AND   a.value_char IS NOT NULL
)
/


Thanks for any help you can provide.

Graham

What about this:

UPDATE pr.pr_prop_val pv
SET    pv.val_char = TO_CHAR(pv.value)
WHERE  pv.pr_prop_code IN ('BLMBRG_COUNTRY', 'BLMBRG_INDUSTRY', 'BLMBRG_TICKER', 'BLMBRG_TITLE', 'BLMBRG_UID', 'BUSINESSWIRE_TITLE',
                           'DJ_EUROASIA_TITLE', 'DJ_US_TITLE', 'FITCH_MRKT_SCTR', 'ORIGINAL_TITLE', 'RD_CNTRY', 'RD_MRKT_SCTR',
                           'REPORT_EXCEP_FLAG', 'REPORT_LANGUAGE', 'REUTERS_RIC', 'REUTERS_TITLE', 'REUTERS_TOPIC', 'REUTERS_USN',
                           'RSRCHDIRECT_TITLE', 'SUMMIT_FAX_BODY_FONT_SIZE', 'SUMMIT_FAX_TITLE', 'SUMMIT_FAX_TITLE_FONT_SIZE',
                           'SUMMIT_TOPIC', 'SUMNET_EMAIL_TITLE', 'XPEDITE_EMAIL_TITLE', 'XPEDITE_FAX_BODY_FONT_SIZE', 'XPEDITE_FAX_TITLE',
                           'XPEDITE_FAX_TITLE_FONT_SIZE', 'XPEDITE_TOPIC')
AND    pv.value IS NOT NULL
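
Before running either version against the full table, a quick sanity check of how many rows the predicate actually touches can be useful (a sketch; it reuses the exclusion list from the original statement):

SELECT COUNT(*)
FROM   pr.pr_prop_val pv
WHERE  pv.pr_prop_code NOT IN ('BLMBRG_BODY', 'ORIGINAL_BODY', 'REUTERS_BODY', 'SUMMIT_FAX_BODY',
                               'XPEDITE_EMAIL_BODY', 'XPEDITE_FAX_BODY', 'PR_DISCLOSURE_STATEMENT', 'PR_DISCLAIMER')
AND    pv.value IS NOT NULL;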


Similar Questions

  • How to make an update statement faster?

    Hello together,

    I have an update statement on an inline view composed of two tables A and B.

    It works and updates a column of table A with the value from the corresponding entry in table B.

    With this update, 1,800,000 records in table A are updated.

    There are a few indexes on the table.

    If I drop the indexes, the update is much faster (0.5 hours instead of 2.5 hours).
    But the time needed to re-create the indexes is much more than what I saved by dropping them (4 hours!).

    So I leave the indexes in place and perform the update.

    My update statement runs in RBO mode and uses the FULL(A) hint to stop Oracle from using the index when building the inline view (joining A and B).

    Does it make a difference to the execution of the update statement whether it runs in CBO or RBO mode?

    What can I do to make the update statement faster?

    Thank you very much in advance for any help.

    Kind regards, Hartmut

    You can probably tune it.

    Here's a guide that should be useful...

    When your query takes too long...
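
    One option worth testing for this pattern, assuming the join columns let the optimizer build a single bulk join, is to rewrite the inline-view update as a MERGE (a sketch with placeholder names, since tables A and B are not shown in the post):

    -- Hypothetical names: table_a.col_x is refreshed from table_b.col_x, joined on id.
    MERGE INTO table_a a
    USING (SELECT id, col_x FROM table_b) b
    ON (a.id = b.id)
    WHEN MATCHED THEN UPDATE SET a.col_x = b.col_x;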

  • Row locking during the update statement


    Hello

    Using Oracle 11gR2. If I issue the following update statement from session A, session A locks the row until a commit/rollback is issued from session A. If session B issues the same update statement against the same row, session B has to wait until the lock is released by session A. However, the application developers think in terms of threads. Would it be possible for the update statement to be issued in the same session by several requests? If so, could the CASE expression be evaluated without the row being locked, which could lead to erroneous results? Trying to keep this message short.

    tableA has columnA (number, primary key) and columnB (number)

    update tableA
    set columnB = case when columnB = 3 then 4
                       when columnB = 4 then 5
                       else columnB
                  end
    where columnA = 6;

    Two requests (at almost the same time) in the same session could both evaluate columnB as 3. The goal is that the first request sets columnB to 4 and the second request sets it to 5.

    > However, application developers think in terms of threads.

    Of course they do...

    But each application thread will have its own dedicated database session.
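
    If the application really must make the second request see the value written by the first within one transaction, one way to make that explicit is to read the row with SELECT ... FOR UPDATE before applying the CASE logic (a sketch, not from this thread; it assumes columnA = 6 identifies a single row, as it is the primary key):

    DECLARE
      v_b tableA.columnB%TYPE;
    BEGIN
      -- Lock the row and read its current value.
      SELECT columnB INTO v_b FROM tableA WHERE columnA = 6 FOR UPDATE;

      UPDATE tableA
      SET    columnB = CASE WHEN v_b = 3 THEN 4
                            WHEN v_b = 4 THEN 5
                            ELSE v_b
                       END
      WHERE  columnA = 6;

      COMMIT;  -- releases the row lock
    END;
    /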

  • Does an UPDATE statement use a consistent read?

    Hi all
    I have a question about the UPDATE statement: does an UPDATE statement use a consistent read? Take a pure update statement such as: update t set col1 = col1 * 20;

    I'll check with the following steps:
    ######################################
    Preparation: create the table:
    ######################################
    create table t_update_old (col1 int, col2 int, col3 float, primary key (col1));

    ######################################
    Preparations: generate data for two tables:
    ######################################
    insert into t_update_old
    select seq_1.nextval, seq_1.nextval, chr(seq_1.nextval) from dual;

    for x in 1..17 loop
      insert into t_update_old
      select seq_1.nextval, seq_1.nextval, chr(seq_1.nextval) from t_update_old;
    end loop;


    ######################################
    Description of the run
    ######################################
    Create 2 sessions: S1 & S2.
    Session S1 updates t_update_old using the following statement:
    update t_update_old set col2 = col2 * 20;

    Session S2 updates t_update_old for one specific row:
    update t_update_old set col2 = col2 + 1 where col1 = 100000;

    The two sessions run in the following order:
    Point in time 1: S1 starts.
    Point in time 2: S2 starts.
    Point in time 3: S2 finishes updating the 100000th row of t_update_old, the transaction in S2 is committed, and S2 is done.
    Point in time 4: the update in S1 reaches the 100000th row of t_update_old.
    Point in time 5: S1 finishes.

    ######################################
    The Question is:
    ######################################
    What is the result of the following statement:
    select col2 from t_update_old where col1 = 100000;

    100 000 * 20 = 2 000 000
    or
    (100 000 + 1) * 20 = 2 000 020

    By default in Oracle, each SQL statement sees the data as it existed when execution of that statement began, and at the READ COMMITTED isolation level the effects of a committed transaction are only seen by statements that start after that transaction has committed. My answer is therefore the second value (2,000,020), because I understand that the transaction in S2 is committed before the query in S1 starts.

    Things get more complicated if the query in S1 starts before the update in S2.
    See Tom Kyte's notes on write consistency:
    http://tkyte.blogspot.com/2005/08/something-different-part-i-of-III.html

  • Modify the update statement

    Hi all

    Here is the update statement in my code:

    UPDATE equip_stg stg
    SET    stg.err = 'WRC'
    WHERE  NOT EXISTS (SELECT *
                       FROM   (SELECT stg.*
                               FROM   equip_stg stg,
                                      reason_setup mer
                               WHERE  stg.equipment_fk = mer.equipment_pk
                               AND    stg.downtime_reason_code = mer.reason_code
                               AND    ((stg.status = 3 AND mer.reason_type = 1)
                                    OR (stg.status = 2 AND mer.reason_type = 3)
                                    OR (stg.status = 1 AND mer.reason_type = 4))
                               AND    stg.downtime_reason_code IS NOT NULL) ers
                       WHERE  ers.equipment_fk = stg.equipment_fk
                       AND    ers.downtime_reason_code = stg.downtime_reason_code
                      )
    AND    stg.status IN (3, 2, 1)
    AND    stg.downtime_reason_code IS NOT NULL;

    The statement above sets the err column of equip_stg to 'WRC' when, for the corresponding equipment and reason, there is no row in the REASON_SETUP table.

    I want to change the code above so that the validation is first done against REASON_SETUP by equipment and reason type. If there is no row for that equipment and reason type in that table, then it must hit a lookup table instead. For example, if I have a record for equipment EQP1 with status = 3, it should check whether REASON_SETUP has any record for equipment EQP1 and reason type 1. If it does, the record should be validated against that table; otherwise the record should be validated against the lookup table. For the lookup table I have the same join conditions:

    STG.equipment_fk = lookup.equipment_pk

    AND stg.downtime_reason_code = lookup.reason_code

    Thanks in advance.

    Kind regards

    Amrit

    Hello

    I think I see. All rows for each (equipment, reason_type) group will be taken entirely from one table or the other: from reason_setup if possible, and from lookup if that combination never occurs in reason_setup. Another way to say it is that you want only the (equipment, reason_type) groups from the best table, where reason_setup is better than lookup. That's exactly what a Top-N query does. Here's one way to do it:

    UPDATE equip_stg
    SET    err_code = 'WRC'
    WHERE  (equipment, downtime_reason_code) NOT IN
           (
             WITH union_data AS
             (
               SELECT 'A' AS preference, equipment, reason_type, reason_code
               FROM   reason_setup
               UNION ALL
               SELECT 'B' AS preference, equipment, reason_type, reason_code
               FROM   lookup
             ),
             got_r_num AS
             (
               SELECT equipment, reason_type, reason_code,
                      DENSE_RANK () OVER (PARTITION BY equipment, reason_type
                                          ORDER BY preference
                                         ) AS r_num
               FROM   union_data
             )
             SELECT equipment,
                    DECODE (reason_type,
                            1, 3,
                            3, 2
                           ),
                    reason_code
             FROM   got_r_num
             WHERE  r_num = 1
           )
    ;

    To see how this works, build up the NOT IN subquery step by step. First run just union_data as a stand-alone query. When you understand what it does, run got_r_num, and when you see what that is doing, run the whole NOT IN subquery.
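
    A minimal, self-contained illustration of the same Top-N idea on made-up data (hypothetical values, not the poster's tables): within each group, rows from the preferred source 'A' win, and 'B' is only used where no 'A' row exists.

    WITH union_data AS (
      SELECT 'A' AS preference, 1 AS grp, 10 AS val FROM dual UNION ALL
      SELECT 'B' AS preference, 1 AS grp, 11 AS val FROM dual UNION ALL
      SELECT 'B' AS preference, 2 AS grp, 20 AS val FROM dual
    ),
    got_r_num AS (
      SELECT preference, grp, val,
             DENSE_RANK() OVER (PARTITION BY grp ORDER BY preference) AS r_num
      FROM   union_data
    )
    SELECT grp, preference, val
    FROM   got_r_num
    WHERE  r_num = 1;   -- grp 1 keeps its 'A' row; grp 2 falls back to 'B'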

  • need help with the Update statement

    Hello
    I was given a question in a course and I tried my best to answer it, and now my brain is giving up. I would really appreciate help with the update statement. I don't mind if you don't post a full solution; a little nudge in the right direction would be really useful. I'll post what I've got.

    THE QUESTION
    /* For those agents deactivated on more than seven missions, change their deactivation date to the earliest deactivation date of all the agents that were activated in the same year as the agent you are currently updating.
    */

    I have divided it into parts. Here is my select statement for agents deactivated on more than 7 missions, which produces the deactivation_dates in the agents table that I want to update...
    SELECT
    s.deactivation_date
    FROM
    (
    SELECT
    a.deactivation_date,
    count(m.mission_id) as nomissions
    FROM
    agents a
    INNER JOIN
    missions_agents m
    on
    a.agent_id=m.agent_id
    GROUP BY
    a.deactivation_date
    ) s
    WHERE
    s.nomissions>7 AND s.deactivation_date IS NOT NULL
    ...and the code for the earliest deactivation date for each agent's activation year
    select 
    a2.deactivation_date
    from
    agents a2
    where a2.deactivation_date= 
    (
    select min(a.deactivation_date)
    from 
    agents a
    where to_number(to_char(a.activation_date,'YYYY'))=to_number(to_char(a2.activation_date,'YYYY'))
    )
    ...I am not sure how to marry these two statements together into the UPDATE statement. I can't extract each deactivation date produced by the first select statement and match it against the earliest deactivation date in the year they were activated from the second select statement.

    Any help greatly appreciated... :))

    I began to wonder how things would go :)

    user8695469 wrote:
    First of all, why does it choose the earliest date of all agents?

    UPDATE  AGENTS_COPY AC /* (1) */
    SET     DEACTIVATION_DATE = (
    SELECT  MIN(AGS.DEACTIVATION_DATE)
    FROM    AGENTS_COPY  AGS
    ,       AGENTS_COPY AC /* (2) */
    WHERE   TRUNC(AGS.ACTIVATION_DATE,'YEAR') = TRUNC(AC.ACTIVATION_DATE,'YEAR') /* (3) */
    )
    

    That happens because the subquery in the SET clause has not been correlated correctly. It seems you are trying to do a correlated update, but there is still a conceptual gap. I have added a few comments to your code above and will explain below.

    (1): when you do a correlated update, it is useful to alias the table, which you did right here.

    (2): this table reference is not necessary and is the reason why the earliest deactivation date of ALL agents is selected. The alias used at (3) refers to THIS table, not the one defined in the update statement. Remove the line indicated by (2) from the FROM clause and you will get a correlated update.

    user8695469 wrote:
    And secondly, why is it updating every row, when I thought I was only updating the rows where the agents are deactivated and have more than 7 missions? Pointers on where I'm going wrong would be much appreciated. (SQL = stupid query language!) :)

    
    WHERE EXISTS
    (
    SELECT
    a.agent_id,
    count(m.mission_id)
    FROM
    agents a
    /* INNER JOIN AC ON AC.AGENT_ID = A.AGENT_ID */
    INNER JOIN
    missions_agents m
    ON
    a.agent_id=m.agent_id
    GROUP BY
    a.agent_id,
    a.deactivation_date
    HAVING
    count(m.mission_id)>7 AND a.deactivation_date IS NOT NULL
    )
    

    Once again this problem is similar to the one above, in that the correlated update isn't happening. An EXISTS subquery tests for the existence of rows. Since your subquery is not related to the table you are trying to update, it will always return a row and therefore evaluates to true for EVERY ROW in the AGENTS table. To limit the result set to only agents with more than 7 missions, you need to add a join condition that references the table in your update statement. I added one above (commented out) as a sample.

    I recommend you look over whatever material you have on correlated subqueries, including the documents I posted above. That seems to be what you're having the most trouble with. If you need me to explain the concept of correlated queries any further, please let me know.

    Thank you!
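
    Putting the two corrections together, the combined statement might look roughly like this (a sketch only, using the table and column names from the thread; it is not a verified solution to the assignment):

    UPDATE agents_copy ac
    SET    ac.deactivation_date =
           (SELECT MIN(ags.deactivation_date)
            FROM   agents_copy ags
            WHERE  TRUNC(ags.activation_date, 'YEAR') = TRUNC(ac.activation_date, 'YEAR'))
    WHERE  ac.deactivation_date IS NOT NULL
    AND    EXISTS
           (SELECT 1
            FROM   missions_agents m
            WHERE  m.agent_id = ac.agent_id
            HAVING COUNT(m.mission_id) > 7);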

  • Commit taking a long time after a bulk update statement - 11gR2

    Hi team,

    We have a source table, which is the main table for our application.

    When I commit a transaction after a batch of updates, it takes a long time, about 2 to 4 minutes.

    I don't understand why it takes so long...

    Could you please help me fix this?

    Here are the details of the table:

    Total number of records: 35M records

    Commit interval: 500 records / commit

    Total partitions: 3

    Total number of columns: 95, including primary, unique and foreign key columns (all columns are updated on every update because the update comes from the online application)

    Total number of foreign key columns: 12

    Unique key columns: 1

    Total indexes: 27 (including 12 foreign key indexes + 1 primary key index + 1 unique key index)

    Example update statement:

    UPDATE DIRECTORY_LISTING_STG
    SET
      COLUMN_1 = :VAL_1
      .
      .
      .
      COLUMN_94 = :VAL_94
    WHERE RECORD_KEY = :P_KEY_VAL;
    
    

    The execution plan is:

    Plan hash value: 2997337465
    
    --------------------------------------------------------------------------------------------
    | Id  | Operation          | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
    --------------------------------------------------------------------------------------------
    |   0 | UPDATE STATEMENT   |                       |     1 |   308 |     3   (0)| 00:00:01 |
    |   1 |  UPDATE            | DIRECTORY_LISTING_STG |       |       |            |          |
    |*  2 |   INDEX UNIQUE SCAN| XPKDLS_STG_PARTPN     |     1 |   308 |     2   (0)| 00:00:01 |
    --------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       2 - access("RECORD_KEY"=TO_NUMBER(:P_KEY))
    
    

    Details of the database,

    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    "CORE 11.1.0.7.0 Production"
    TNS for Linux: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    
    

    Thank you

    RAM

    2793617 wrote:

    Hello

    I am updating the table via an application using a Hibernate JDBC connection, online.

    The update statements are sent to the database in batches of 500 records (batch update).

    Thanks, Ram

    Row-by-row UPDATE is slow-by-slow processing.

    In Oracle, no other session can 'see' uncommitted changes.

    The only way to significantly reduce the elapsed time is to have only 1 UPDATE & 1 COMMIT!
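
    For illustration, a single set-based statement would look roughly like this (a sketch; it assumes the new values can be joined in from a staging source such as a hypothetical new_values_stg table rather than bound one row at a time):

    MERGE INTO directory_listing_stg t
    USING new_values_stg s            -- hypothetical source of the new column values
    ON (t.record_key = s.record_key)
    WHEN MATCHED THEN UPDATE
      SET t.column_1 = s.column_1
          -- ..., t.column_94 = s.column_94
    ;
    COMMIT;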

  • Update statement

    Hi friends,

    Need your help with an update statement.

    update int_tab t set t.icode =
      (select ac.icode from ac_tab ac, edition ed
       where ac.current_code = ed.unit)
    where ed.process_no = t.action_no;

    where ed.process_no = t.action_no
          *
    ERROR at line 4:
    ORA-00904: "ED"."PROCESS_NO": invalid identifier

    Basically, we need to update int_tab.icode with ac_tab.icode where ac_tab.current_code = edition.unit

    Thank you very much

    I suggest you use MERGE. The inner SELECT, which I call "source", can be checked carefully on its own, and then you can easily do the UPDATE.

    No funny execution plans, and if the SELECT works, the UPDATE works as well.

    MERGE INTO int_tab target
    USING (SELECT t.rowid AS rid, ac.icode
           FROM   ac_tab ac
           INNER JOIN edition ed
                   ON ac.current_code = ed.unit
           INNER JOIN int_tab t
                   ON ed.process_no = t.action_no
           WHERE  t.icode != ac.icode
           AND    t.icode IS NOT NULL) source
    ON (target.rowid = source.rid)
    WHEN MATCHED
    THEN UPDATE SET target.icode = source.icode;

  • Update statement with multiple joins

    Hello

    I am using Oracle 12c and here is my question:

    /*

    ledgerstb is an Oracle table (the TransNo column is the primary key)

    vledgerstb_gtt is a global temporary table

    vledgervc_gtt is a global temporary table

    */

    UPDATE
      (SELECT ledgerstb.TransNoVC      AS TransNoVC_Old,
              vledgervc_gtt.transnovc  AS TransNoVC_New
       FROM   ledgerstb
       INNER JOIN vledgerstb_gtt ON ledgerstb.TransNo = vledgerstb_gtt.transnostb
       INNER JOIN vledgervc_gtt  ON ledgerstb.STBNo   = vledgervc_gtt.STBNo
      ) T
    SET T.TransNoVC_Old = T.TransNoVC_New;

    This update statement gives the following error message:

    Error report-

    ORA-01779: cannot modify a column which maps to a non key-preserved table

    ORA-06512: at "RELYC.STBPKG", line 597

    ORA-06512: at line 201

    01779. 00000 - "cannot modify a column which maps to a non key-preserved table"

    *Cause: An attempt was made to insert or update columns of a join view which map to a non-key-preserved table.

    *Action: Modify the underlying base tables directly.

    For reference, the inner SELECT statement gives the following result (shown in blue in the original post):

    TransNo: 1 STBNo: VAAAABM315711131

    TransNo: 2 STBNo: VAAAABM315711214

    TransNo: 3 STBNo: VAAAABM315711262

    TransNo: 4 STBNo: VAAAABM316410986

    Please suggest whether I'm doing something wrong or whether it is not possible to update the table like that.

    (I know that I can use a FORALL update statement, but I do not want to use collections here.)

    Thank you Carine, I finally found my answer:

    the update query requires that each record being updated is identified only once, so if I make the join 1<->1 it updates successfully.

    Thanks to all who contributed, or at least tried. Thanks again.
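
    For reference, a MERGE is a common workaround when ORA-01779 blocks an update through a join view; a rough sketch using the names above (it assumes the GTT joins identify each ledgerstb row exactly once, which is the 1<->1 condition mentioned in the follow-up):

    MERGE INTO ledgerstb l
    USING (SELECT s.transno, v.transnovc AS transnovc_new
           FROM   ledgerstb s
           INNER JOIN vledgerstb_gtt g ON g.transnostb = s.transno
           INNER JOIN vledgervc_gtt  v ON v.stbno      = s.stbno) src
    ON (l.transno = src.transno)
    WHEN MATCHED THEN UPDATE SET l.transnovc = src.transnovc_new;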

  • Update statement does a lot of consistent gets

    Hello

    When I run this update statement, it generates a lot of consistent gets:

    UPDATE PFA_FUSIONFACC FUS
    SET (COD_FACTURA, TOT_CARGO_DB, TOT_CARGO_FB) =
        (SELECT NPC.COD_FACTUcaRA AS COD_FACTURA, NPC.TOT_BASEIMPO_E AS TOT_BASEIMPO, NPC.TOT_IMPEMI AS TOT_IMPEMI
         FROM   PFA_NPCOFACTMES NPC, AUX_CTAFACTU CTA
         WHERE  CTA.CTA_FACTURAC = FUS.CTA_FACTURAC
         AND    NPC.cod_postal   = CTA.cod_postal
         AND    NPC.cta_facturac = CTA.cta_facturac)
    WHERE FUS.rowid IN
        (SELECT FUS.ROWID
         FROM   PFA_NPCOFACTMES NPC, AUX_CTAFACTU CTA, PFA_FUSIONFACC FUS
         WHERE  CTA.CTA_FACTURAC = FUS.CTA_FACTURAC
         AND    NPC.cod_postal   = CTA.cod_postal
         AND    NPC.cta_facturac = CTA.cta_facturac
         AND    ROWNUM < 10000);

    9999 rows updated.

    Elapsed: 01:38:15.47

    Execution Plan

    ----------------------------------------------------------

    Plan hash value: 2048200947

    --------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                           | Name                | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
    --------------------------------------------------------------------------------------------------------------------------------
    |   0 | UPDATE STATEMENT                    |                     |     1 |    62 |       |   117K  (4)| 00:23:27 |       |       |
    |   1 |  UPDATE                             | PFA_FUSIONFACC      |       |       |       |            |          |       |       |
    |   2 |   NESTED LOOPS                      |                     |     1 |    62 |       |   117K  (4)| 00:23:27 |       |       |
    |   3 |    VIEW                             | VW_NSO_1            |  9999 |   117K|       |   117K  (4)| 00:23:27 |       |       |
    |   4 |     SORT UNIQUE                     |                     |     1 |   478K|       |            |          |       |       |
    |*  5 |      COUNT STOPKEY                  |                     |       |       |       |            |          |       |       |
    |*  6 |       HASH JOIN                     |                     |   774K|    36M|    86M|   117K  (4)| 00:23:27 |       |       |
    |   7 |        NESTED LOOPS                 |                     |  2258K|    60M|       |   101K  (4)| 00:20:16 |       |       |
    |   8 |         PARTITION RANGE ALL         |                     |  2258K|    32M|       |  2554   (2)| 00:00:31 |     1 |   338 |
    |   9 |          INDEX FAST FULL SCAN       | PFA_NPCOFACTMES_103 |  2258K|    32M|       |  2554   (2)| 00:00:31 |     1 |   338 |
    |* 10 |         INDEX UNIQUE SCAN           | PK_AUX_CTAFACTU     |     1 |    13 |       |     2   (0)| 00:00:01 |       |       |
    |  11 |        INDEX FAST FULL SCAN         | PFA_FUSIONFACC_I01  |  3923K|    78M|       |  5358   (2)| 00:01:05 |       |       |
    |  12 |    TABLE ACCESS BY USER ROWID       | PFA_FUSIONFACC      |     1 |    50 |       |     1   (0)| 00:00:01 |       |       |
    |  13 |   TABLE ACCESS BY LOCAL INDEX ROWID | PFA_NPCOFACTMES     |     1 |    56 |       |     2   (0)| 00:00:01 |     1 |     1 |
    |  14 |    NESTED LOOPS                     |                     |     1 |    69 |       |    49   (0)| 00:00:01 |       |       |
    |* 15 |     INDEX SKIP SCAN                 | PK_AUX_CTAFACTU     |     1 |    13 |       |    47   (0)| 00:00:01 |       |       |
    |  16 |     PARTITION RANGE ITERATOR        |                     |     1 |       |       |     1   (0)| 00:00:01 |   KEY |   KEY |
    |* 17 |      INDEX RANGE SCAN               | PFA_NPCOFACTMES_103 |     1 |       |       |     1   (0)| 00:00:01 |   KEY |   KEY |
    --------------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       5 - filter(ROWNUM<10000)
       6 - access("CTA"."CTA_FACTURAC"="FUS"."CTA_FACTURAC")
      10 - access("NPC"."COD_POSTAL"="CTA"."COD_POSTAL" AND "NPC"."CTA_FACTURAC"="CTA"."CTA_FACTURAC")
      15 - access("CTA"."CTA_FACTURAC"=:B1)
           filter("CTA"."CTA_FACTURAC"=:B1)
      17 - access("NPC"."COD_POSTAL"="CTA"."COD_POSTAL" AND "NPC"."CTA_FACTURAC"=:B1)
           filter("NPC"."CTA_FACTURAC"="CTA"."CTA_FACTURAC")

    Statistics
    ----------------------------------------------------------
              1  recursive calls
          13252  db block gets
      200711795  consistent gets
         407276  physical reads
        3550216  redo size
            837  bytes sent via SQL*Net to client
           1320  bytes received via SQL*Net from client
              3  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
           9999  rows processed

    Rollback complete.

    But the query used inside the update does far fewer gets:

    SQL> SELECT FUS.ROWID
      2  FROM   PFA_NPCOFACTMES NPC, AUX_CTAFACTU CTA, PFA_FUSIONFACC FUS
      3  WHERE  CTA.CTA_FACTURAC = FUS.CTA_FACTURAC
      4  AND    NPC.cod_postal   = CTA.cod_postal
      5  AND    NPC.cta_facturac = CTA.cta_facturac
      6  AND    ROWNUM < 10000;

    9999 rows selected.

    Elapsed: 00:00:20.57

    Execution Plan

    ----------------------------------------------------------

    Plan hash value: 20234272

    ------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                   | Name                | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
    ------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT            |                     |  9999 |   937K|       | 29298   (2)| 00:05:52 |       |       |
    |*  1 |  COUNT STOPKEY              |                     |       |       |       |            |          |       |       |
    |*  2 |   HASH JOIN                 |                     |   774K|    70M|    58M| 29298   (2)| 00:05:52 |       |       |
    |   3 |    PARTITION RANGE ALL      |                     |  2258K|    32M|       |  2554   (2)| 00:00:31 |     1 |   338 |
    |   4 |     INDEX FAST FULL SCAN    | PFA_NPCOFACTMES_103 |  2258K|    32M|       |  2554   (2)| 00:00:31 |     1 |   338 |
    |*  5 |    HASH JOIN                |                     |  3906K|   175M|   123M| 12842   (2)| 00:02:35 |       |       |
    |   6 |     INDEX FAST FULL SCAN    | PFA_FUSIONFACC_I01  |  3923K|    78M|       |  5358   (2)| 00:01:05 |       |       |
    |   7 |     INDEX FAST FULL SCAN    | PK_AUX_CTAFACTU     |   147K|  1869K|       |  1120   (1)| 00:00:14 |       |       |
    ------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       1 - filter(ROWNUM<10000)
       2 - access("NPC"."COD_POSTAL"="CTA"."COD_POSTAL" AND "NPC"."CTA_FACTURAC"="CTA"."CTA_FACTURAC")
       5 - access("CTA"."CTA_FACTURAC"="FUS"."CTA_FACTURAC")

    Statistics
    ----------------------------------------------------------
              1  recursive calls
              0  db block gets
          54736  consistent gets
          53070  physical reads
              0  redo size
         304491  bytes sent via SQL*Net to client
           7814  bytes received via SQL*Net from client
            668  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
           9999  rows processed

    The "PFA_NPCOFACTMES_103" index is partitioned. I don't know if it's related to this issue:

    SQL > select table_name, uniqueness, separated from all_indexes where index-name = "PFA_NPCOFACTMES_103";

    TABLE_NAME UNIQUENES BY

    ------------------------------ --------- ---

    PFA_NPCOFACTMES NON-UNIQUE YES

    SQL > select TABLE_NAME, COLUMN_NAME, position_colonne from all_ind_columns where index-name = "PFA_NPCOFACTMES_103";

    TABLE_NAME COLUMN_NAME POSITION_COLONNE

    ------------------------------ ------------------------------ ---------------

    PFA_NPCOFACTMES CTA_FACTURAC 2

    PFA_NPCOFACTMES COD_POSTAL 1

    Why the update does not generate as much "becomes"?

    Thanks in advance,

    Jose Luis

    I don't see the reason at first glance, but it is obvious that the inner nested loop is doing a lot of work (a full index scan).

    My first thought would be to rewrite the query as a MERGE statement.

    Perhaps that leads to a more efficient plan, and then there is no need to dig deeper into the issues you have raised ;-)

    MERGE INTO PFA_FUSIONFACC FUS
    USING (
           SELECT
                  NPC.COD_FACTUcaRA  AS COD_FACTURA,
                  NPC.TOT_BASEIMPO_E AS TOT_BASEIMPO,
                  NPC.TOT_IMPEMI     AS TOT_IMPEMI,
                  CTA.CTA_FACTURAC
           FROM   PFA_NPCOFACTMES NPC, AUX_CTAFACTU CTA
           WHERE  NPC.cod_postal   = CTA.cod_postal
           AND    NPC.cta_facturac = CTA.cta_facturac
          ) cta
    ON (cta.CTA_FACTURAC = FUS.CTA_FACTURAC)
    WHEN MATCHED THEN UPDATE
    SET FUS.COD_FACTURA  = cta.COD_FACTURA,
        FUS.TOT_CARGO_DB = cta.TOT_BASEIMPO,
        FUS.TOT_CARGO_FB = cta.TOT_IMPEMI
    WHERE ROWNUM <

  • How to optimize the update statement so that the query is reading the same table?

    Hi all

    I have a process with the following UPDATE statement:

    UPDATE sales_val a
    SET ship_date = (SELECT MAX(hist.ship_date)
                     FROM   sales_hist hist
                     WHERE  hist.x_id  = a.x_id
                     AND    hist.x_val = a.x_val)
    WHERE EXISTS (SELECT *
                  FROM   sales_hist hist
                  WHERE  hist.x_id  = a.x_id
                  AND    hist.x_val = a.x_val);

    sales_val - 50 million rows (unique index)

    sales_hist - 20 million rows (unique index)

    I ran into many problems with waits and locks. How can I optimize this to avoid the locks, either through table parameters or by changing the statement? Or maybe there is another way to optimize it?

    Kind regards

    Bolo

    Thank you for that. Unfortunately BULK COLLECT + FORALL does not work with these types in 10g, so the workaround is a FOR loop, which is not as effective as FORALL in many cases. I am still doing some tests to solve this problem and will post here once it's done.

    EDIT: hash partitioning + MERGE - 3 to 5 minutes (elapsed), so for now that's the solution to the problem. Thank you all for the great discussion!
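
    For reference, a sketch of what the MERGE form of that update could look like (it assumes x_id and x_val identify the target rows, as the unique indexes suggest):

    MERGE INTO sales_val a
    USING (SELECT x_id, x_val, MAX(ship_date) AS max_ship_date
           FROM   sales_hist
           GROUP  BY x_id, x_val) h
    ON (a.x_id = h.x_id AND a.x_val = h.x_val)
    WHEN MATCHED THEN UPDATE SET a.ship_date = h.max_ship_date;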

  • Undo tablespace and very large update statements

    Hello

    to be sure that I'm right...

    Currently we need to update a table with 230,000,000 rows. A single statement attempted to update 14,000,000 rows, but this update statement failed after 29 hours with ORA-01555.

    1. How long does the undo tablespace keep the undo blocks? Only until the commit of the current transaction, right?

    2. On the table we have 7 indexes on columns. Is it true that we can reduce the amount of undo needed for this operation if we drop some of the indexes?

    Kind regards

    Mark

    Query DBA_SEGMENTS or DBA_ROLLBACK_SEGMENTS for the undo segments and V$UNDOSTAT for undo usage.

    Hemant K Collette
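
    For example, a quick look at recent undo consumption and ORA-01555 counts (V$UNDOSTAT keeps one row per ten-minute interval):

    SELECT begin_time, end_time, undoblks, maxquerylen, ssolderrcnt
    FROM   v$undostat
    ORDER  BY begin_time DESC;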

  • What update statement is sent to the database?

    In several pages I use the APEX 4.2 Automatic Row Processing (DML) processes. If I do an update of a record, which columns are in the update statement? All columns, only the displayed ones, only the changed columns, also the PK column?

    As far as I can see now, all the columns that are displayed are updated, including the PK, which is display only.

    I ask this because when the PK is updated (not changed, it is display only), all tables, in my case 75 of them, where this PK is used as an FK will get a lock.

    I did some tests and found out that APEX does indeed update all the columns used in items whose Source Type is Database Column. Even if the item is display only, read-only, etc.!

    My solution now, to prevent the locks, is the following:

    Automated Row Fetch:

    Item containing primary key column value: Pxx_ID

    Primary key column: ID (this is the primary key)

    Pxx_ID (display only, hidden or read-only)

    Source used: Only when current value in session state is null

    Source type: Static Assignment

    Source value or expression:

    Automatic Row Processing (DML):

    Item containing primary key column value: Pxx_ID

    Primary key column: ID

  • Tuning a SQL insert that inserts 1 million rows and does a full table scan

    Hi Experts,

    I'm on Oracle 11.2.0.3 on Linux. I have a SQL statement that inserts data into a history/archive table from the main application table based on a date. The application table has 3 million rows, and all the rows that are more than 6 months old must go into the history/archive table. This was decided recently, and we have 1 million rows that meet this criterion. This insert into the archive table takes about 3 minutes. The explain plan shows a full table scan on the main table, which is the right thing, because we are pulling 1 million rows from the main table into the history table.

    My question is: is it possible to make this SQL go faster?

    Here's the query plan (I changed the table names, etc.):

       INSERT INTO EMP_ARCH
       SELECT *
    FROM EMP M
    where HIRE_date < (sysdate - :v_num_days);
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        2      0.00       0.00          0          0          0           0
    Execute      2     96.22     165.59      92266     147180    8529323     1441230
    Fetch        0      0.00       0.00          0          0          0           0
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        4     96.22     165.59      92266     147180    8529323     1441230
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 166
    Rows     Row Source Operation
    -------  ---------------------------------------------------
    1441401   TABLE ACCESS FULL EMP (cr=52900 pr=52885 pw=0 time=21189581 us)
    
    
    

    I heard that there is a way to use the opt_param hint to increase the multiblock read count, but it did not work for me... I would be grateful for suggestions on this. Could bulk collections and rewriting this in PL/SQL also make it faster?

    Thank you

    OrauserN

    (1) Create an index on hire_date.

    (2) Use the APPEND hint on the INSERT ... SELECT.

    (3) Run ALTER SESSION ENABLE PARALLEL DML; before you run the whole statement.
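
    Taken together, a direct-path, parallel version of the insert might look like this (a sketch; it assumes you are allowed to enable parallel DML, and the degree of 4 is only an example):

    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND PARALLEL(EMP_ARCH, 4) */ INTO EMP_ARCH
    SELECT /*+ PARALLEL(M, 4) */ *
    FROM   EMP M
    WHERE  HIRE_DATE < (SYSDATE - :v_num_days);

    COMMIT;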

  • Problem writing an update statement

    Hi all
    I need to write an update statement.
    These are the tables: country: name, code, population, nextyear_population, ...
    nextyear_population is empty; first I should get population_growth from the population table, then compute next year's population from the current population and population_growth.

    population: country (in fact the country code, not the name), population_growth, ...

    So I wrote this and got the following error:
    Error starting at line 1 in command:
    
    update country c
    set c.nextyear_population=(c.population+ ((c.population*(select pop.population_growth from population pop join country co on co.code=pop.code))/100)
    )
    where country.code=population.country
    Error at Command Line:4 Column:20
    Error report:
    SQL Error: ORA-00904: "POPULATION"."COUNTRY": invalid identifier
    00904. 00000 -  "%s: invalid identifier"
    *Cause:    
    *Action:
    --------------------------------------------------------
    --  File created - Wednesday-May-15-2013   
    --------------------------------------------------------
    --------------------------------------------------------
    --  DDL for Table COUNTRY
    --------------------------------------------------------
    
      CREATE TABLE "intern"."COUNTRY" ("NAME" VARCHAR2(40), "CODE" CHAR(2), "CAPITAL" VARCHAR2(40), "PROVINCE" VARCHAR2(40), "POPULATION" NUMBER, "AREA" NUMBER, "NEXTYEAR_POPULATION" NUMBER) 
     
    
       COMMENT ON COLUMN "intern"."COUNTRY"."NAME" IS 'the country name'
     
       COMMENT ON COLUMN "intern"."COUNTRY"."CODE" IS 'the internet country code (two letters)'
     
       COMMENT ON COLUMN "intern"."COUNTRY"."CAPITAL" IS 'the name of the capital'
     
       COMMENT ON COLUMN "intern"."COUNTRY"."PROVINCE" IS 'the province where the capital belongs to'
     
       COMMENT ON COLUMN "intern"."COUNTRY"."POPULATION" IS 'the population number'
     
       COMMENT ON COLUMN "intern"."COUNTRY"."AREA" IS 'the total area'
     
       COMMENT ON TABLE "intern"."COUNTRY"  IS 'the countries of the world with some data'
    REM INSERTING into intern.COUNTRY
    SET DEFINE OFF;
    Insert into intern.COUNTRY (NAME,CODE,CAPITAL,PROVINCE,POPULATION,AREA,NEXTYEAR_POPULATION) values ('Andorra','ad','Andorra la Vella','Andorra la Vella',69865,468,null);
    Insert into intern.COUNTRY (NAME,CODE,CAPITAL,PROVINCE,POPULATION,AREA,NEXTYEAR_POPULATION) values ('United Arab Emirates','ae','Abu Dhabi','Abu Dhabi',2523915,82880,null);
    Insert into intern.COUNTRY (NAME,CODE,CAPITAL,PROVINCE,POPULATION,AREA,NEXTYEAR_POPULATION) values ('Afghanistan','af','Kabul','Kabul',28513677,647500,null);
    
    
    
    
    --------------------------------------------------------
    --  File created - Wednesday-May-15-2013   
    --------------------------------------------------------
    --------------------------------------------------------
    --  DDL for Table POPULATION
    --------------------------------------------------------
    
      CREATE TABLE "intern"."POPULATION" ("COUNTRY" CHAR(2), "POPULATION_GROWTH" NUMBER, "INFANT_MORTALITY" NUMBER) 
     
    
       COMMENT ON COLUMN "intern"."POPULATION"."COUNTRY" IS 'the country code'
     
       COMMENT ON COLUMN "intern"."POPULATION"."POPULATION_GROWTH" IS 'population growth rate (percentage, per annum)'
     
       COMMENT ON COLUMN "intern"."POPULATION"."INFANT_MORTALITY" IS 'infant mortality (per thousand)'
     
       COMMENT ON TABLE "intern"."POPULATION"  IS 'information about the population of the countries'
    REM INSERTING into intern.POPULATION
    SET DEFINE OFF;
    Insert into intern.POPULATION (COUNTRY,POPULATION_GROWTH,INFANT_MORTALITY) values ('ad',0.95,4.05);
    Insert into intern.POPULATION (COUNTRY,POPULATION_GROWTH,INFANT_MORTALITY) values ('ae',1.54,14.51);
    Insert into intern.POPULATION (COUNTRY,POPULATION_GROWTH,INFANT_MORTALITY) values ('af',4.77,163.07);
    Thanks in advance

    I'm using Oracle 11g and Ubuntu 12.


    best, david
    update  country c
       set  nextyear_population = population + population * (select pop.population_growth from population pop where c.code = pop.country) / 100
     where  code in (
                     select  country
                       from  population
                    )
    /
    

    SY.
