Bottleneck in bulk updates/inserts

I am doing bulk updates and inserts through SQLAPI++, using a bulk-update extension library developed in-house long ago. The database is Oracle. Bulk inserts/updates are done in batches of 50K records. I am facing a very specific performance problem with these bulk operations.

The first bulk-insert batch (50K records) completes within 3 seconds, while the next similar bulk insert or update batch (50K rows again) takes a huge 373 seconds. Using 'top' on the RHEL AS 4 server, I could see that it is the Oracle process that accounts for all of the 373 seconds; so SQLAPI++ or the in-house extension isn't the culprit.

The third batch of 50K records in the sequence takes much longer still (913 seconds). The time increases exponentially, and there doesn't seem to be any pattern to it at all.

Surprisingly, this is not consistent. On a good day, the second batch also goes through in 3 seconds, with all the records intact and perfect, without any kind of defect in the data. In fact, all of the following batches then finish in about 3 seconds as well.

Even more surprising: if I truncate the table and start the process over, the problem reappears. It again starts taking 370-380 seconds for the 2nd batch. Yet if I use a 'delete from' statement instead of 'truncate table' to remove all records from the table, there is no problem!

So in short, I have come to the conclusion that the bottleneck occurs when the table has been truncated (or newly created), and not when all records have been deleted using 'delete from'.

Any guess why this could happen? I admit that I am not very good with databases, so any help would be much appreciated.

Thanks in advance.

-
Shreyas

shreyas_kulkarni wrote:

Well, I ran the bulk op anew with statspack snapshots before and after the operation, as suggested by Jonathan. I used the old hash value (for the long-running UPDATE operation) from the statspack report to obtain the sql_id and pull the execution plan. That worked, of course.

Then I recompiled the program to reproduce the 'good' behavior, ran the bulk op, and generated the statspack report. And surprise... there is no UPDATE statement in the report. The UPDATE was simply not executed by the server; yet in the test program, the rows affected by the UPDATE operations were exactly the number of rows in the input (which is correct, according to the supplied input data). So now it seems that somehow the server decides not to execute the UPDATE, but still (rightly) returns all the rows as rows affected.

While I understand that a statspack report does not contain every query fired, if I feed in the correct input so that the INSERT gets fired, the INSERT op, which takes considerably less time than the UPDATE operation, does find its place in the report. So I think that in the 'good' case the UPDATE query is not executed, which is why the 'good' behavior reports 3 seconds, while the 'bad' behavior, which does execute the UPDATE, reports 440 seconds.

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------
SQL_ID  b5s6vazhy5a1g, child number 0
-------------------------------------
UPDATE MDS_MemberRelationMap SET parentlevelid = :0,childlevelid = :1, childdimid =
:2, childleveldimid = :3, parentleveldimid = :4 WHERE (parentmemberid = :5) AND
(childmemberid = :6) AND (parentdimid = :7)

Plan hash value: 4110530053

--------------------------------------------------------------------------------------------
| Id  | Operation          | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |                       |       |       |     2 (100)|          |
|   1 |  UPDATE            | MDS_MEMBERRELATIONMAP |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| MDS_MEMBERRELATIONMAP |     1 |   104 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------- 

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(("PARENTMEMBERID"=TO_NUMBER(:5) AND "CHILDMEMBERID"=TO_NUMBER(:6) AND
                    "PARENTDIMID"=TO_NUMBER(:7)))

Note
-----
- dynamic sampling used for this statement
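
For reference, a minimal sketch of how a cached plan like the one above can be pulled with DBMS_XPLAN, using the sql_id and child number shown in the plan header (an Oracle session with access to the cursor cache is assumed):

```sql
-- Display the cached execution plan for the statement shown above
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('b5s6vazhy5a1g', 0));
```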

It seems that I have not yet understood the logic of your 'server' process, but if it seems a reasonable explanation to you that, in the 'good' case, the UPDATE is not performed at all, then this might be the point.

But looking at the execution plan of the UPDATE, don't you think it might be better to perform the UPDATE using an index? Is there an index on the parentmemberid, childmemberid and parentdimid columns of MDS_MEMBERRELATIONMAP?

Moreover, it seems that you have no statistics at all on the MDS_MEMBERRELATIONMAP table, because the optimizer is using dynamic sampling. That takes some time, because the optimizer runs a query against your table using the predicates to check the cardinality while parsing the statement.
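
A minimal sketch of gathering statistics on that table, so dynamic sampling is no longer needed (the schema is an assumption; adapt the parameters to your environment):

```sql
-- Gather optimizer statistics on the table and its indexes
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,                      -- assumption: table is in the current schema
    tabname => 'MDS_MEMBERRELATIONMAP',
    cascade => TRUE                       -- also gather index statistics
  );
END;
/
```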

I already mentioned that the execution of a single update requires an average of 22,000 block gets, according to the statspack report you posted previously, which is probably due to the full table scan performed. If the three predicates on parentmemberid, childmemberid and parentdimid identify only a few rows (or maybe even just one), then it is probably much faster to have a suitable index for the execution of the update.
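
If those three predicate columns are selective, an index along these lines might help (the index name is hypothetical; the column names are taken from the plan above):

```sql
-- Hypothetical composite index covering the UPDATE's WHERE clause
CREATE INDEX mds_memrelmap_par_child_ix
  ON MDS_MemberRelationMap (parentmemberid, childmemberid, parentdimid);
```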

Kind regards
Randolf

Oracle related blog stuff:
http://Oracle-Randolf.blogspot.com/

SQLTools++ for Oracle (open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/

Tags: Database

Similar Questions

  • No data found exception in bulk updates

    I'm trying to catch the no data found exception in bulk updates when no record is found to update in the FORALL loop.


    OPEN casualty;
    LOOP
    FETCH casualty
    BULK COLLECT INTO v_cas, v_adj, v_nbr
    LIMIT 10000;


    FORALL i IN 1..v_cas.COUNT
    UPDATE tpl_casualty
    SET casualty_amt = (SELECT tn FROM tpl_adjustment WHERE cas_adj = v_adj(i))
    WHERE cas_nbr = v_nbr(i);
    EXCEPTION WHEN NO_DATA_FOUND THEN dbms_output.put_line('exception');


    I get this error at the line where I handle the exception:
    PLS-00103: Encountered the symbol "EXCEPTION" when expecting one of the following:

    begin case declare end exit for goto if loop mod null pragma
    raise return select update while with <an identifier>
    <a double-quoted delimited-identifier> <a bind variable> <<
    close current delete fetch lock insert open rollback
    savepoint set sql execute commit forall merge pipe

    Can someone please direct me on how to work around this problem?
    If I do not handle the exception, the script fails when it tries to update a record that does not exist, and the error says: no data found exception.

    Thanks for your help.

    Published by: user8848256 on November 13, 2009 18:15

    NO_DATA_FOUND is not an exception that is thrown when an UPDATE cannot find any rows.

    SQL%ROWCOUNT can be used to determine the number of rows affected by an update statement, but if 0 rows are updated, no exception will be thrown (that's just not how things work).
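
    A minimal sketch of that behavior (table and column names are borrowed from the posted code; the values in the statement are hypothetical):

    ```sql
    BEGIN
      UPDATE tpl_casualty
         SET casualty_amt = 0        -- hypothetical value
       WHERE cas_nbr = -1;           -- hypothetical predicate matching no rows
      -- No exception is raised for 0 rows; SQL%ROWCOUNT simply reports 0.
      DBMS_OUTPUT.put_line('rows updated: ' || SQL%ROWCOUNT);
    END;
    /
    ```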

    If you post your real CURSOR query (the SELECT), it is quite possible that we can help you create a single SQL statement to meet your needs (a single SQL statement will be faster than your current implementation).

    Have you looked at using the MERGE command?
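
    A MERGE-based sketch of the update, as a starting point only - the table and column names are taken from the posted code, but the join condition is an assumption, since the real pairing of cas_adj and cas_nbr comes from the cursor that was not posted:

    ```sql
    -- Hypothetical single-statement rewrite of the FETCH / FORALL loop;
    -- the USING subquery must reproduce whatever pairing of cas_adj and
    -- cas_nbr your cursor actually returns.
    MERGE INTO tpl_casualty c
    USING (SELECT cas_adj, cas_nbr, tn
             FROM tpl_adjustment) a   -- assumption: cas_nbr is available here
       ON (c.cas_nbr = a.cas_nbr)
     WHEN MATCHED THEN
       UPDATE SET c.casualty_amt = a.tn;
    ```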

  • How to manage update/insert in a View Object with an outer join?

    Hello

    I have a problem handling update/insert in a View Object that contains two EOs joined with a right outer join. The first EO's values are inserted first, and I want the second EO's values to be updated if they already exist, and a new record to be created if they don't.

    The error when I commit after entering values is: 'Entity row with null key is not found in SecondEO.'

    What is the solution?

    Thank you

    Hello

    Make sure that in your view object you have included the key attributes of both entity objects.

    Kind regards

    Saif Khan.

  • What is the use of Refresh After Update/Insert in the EO wizard?

    Hello!

    What is the use of Refresh After Update/Insert in the EO wizard? When do I need it and when not?

    Thank you.

    - These checkboxes are for columns whose values change after database triggers run.

    BR, 906099

  • Need help writing an update / insert with linked tables

    I am new to ColdFusion. I am learning to write queries and am building a small application to collect information about visitors to my web site. (It's also a good way for me to learn the language.) I'm having a problem, and it is simply how to use an update/insert with related tables. I don't know if I am even gathering the appropriate variables to compare against existing DB records before deciding whether to run the update or the insert queries. Can someone help me, show me how to update/insert related tables, and maybe tell me if I am creating the right variables for the comparison? This is my code, with comments.

    <!--- create a variable to compare with the db table --->
    <cfset userIP = '#CGI.REMOTE_ADDR#'>

    <!--- run the query and compare against REMOTE_ADDR --->
    <cfquery name="userTracking" datasource="#APPLICATION.dataSource#" dbtype="ODBC">
    SELECT REMOTE_ADDR
    FROM user_track
    WHERE REMOTE_ADDR = #userIP#
    </cfquery>

    <!--- if the record exists, then run this update --->
    <cfif userTracking EQ userIP>
    <cfquery datasource="#APPLICATION.dataSource#">
    UPDATE user_track, trackDetail
    SET user_track.REMOTE_ADDR = <cfqueryparam value="#Trim(CGI.REMOTE_ADDR)#" cfsqltype="CF_SQL_VARCHAR">,
    user_track.browser = <cfqueryparam value="#Trim(CGI.HTTP_USER_AGENT)#" cfsqltype="CF_SQL_VARCHAR">,
    user_track.visits = visits + 1,
    trackDetail.date = <cfqueryparam value="#Now()#" cfsqltype="CF_SQL_TIMESTAMP">,
    trackDetail.path = <cfqueryparam value="#Trim(PATH_INFO)#" cfsqltype="CF_SQL_LONGVARCHAR">
    WHERE REMOTE_ADDR = <cfqueryparam value="#Trim(CGI.REMOTE_ADDR)#" cfsqltype="CF_SQL_VARCHAR">
    </cfquery>
    <cfelse>

    <!--- if it doesn't, then insert a new record --->
    <cfquery datasource="#APPLICATION.dataSource#" dbtype="ODBC">
    INSERT INTO user_track, trackDetail
    (user_track.REMOTE_ADDR, user_track.browser, user_track.visits, trackDetail.userID, trackDetail.date, trackDetail.path)
    VALUES (
    <cfqueryparam value="#Trim(CGI.REMOTE_ADDR)#" cfsqltype="CF_SQL_VARCHAR">,
    <cfif Len(Trim(HTTP_USER_AGENT)) GT 1>
    <cfqueryparam value="#Trim(CGI.HTTP_USER_AGENT)#" cfsqltype="CF_SQL_VARCHAR">,
    </cfif>
    visits + 1,
    <cfqueryparam value="#Trim(CGI.HTTP_USER_AGENT)#" cfsqltype="CF_SQL_VARCHAR">,
    <cfqueryparam value="#user_track.userID#" cfsqltype="CF_SQL_VARCHAR">,
    <cfqueryparam value="#Now()#" cfsqltype="CF_SQL_TIMESTAMP">,
    <cfqueryparam value="#Trim(PATH_INFO)#" cfsqltype="CF_SQL_LONGVARCHAR">
    )
    </cfquery>
    </cfif>


    Am I close on this? It doesn't throw any errors, but it also no longer works, so it is obviously wrong. I get a cfdump at the end of my comparison query, but once it hits the cfif, it is lost.

    Thanks for your time no matter who.

    Newbie

    You must define the variable before you can use it.  You are trying to use it on line 1 of your template.

  • Update/Insert into mapping

    Hello world

    I need your help please. I would like to build an update/insert mapping.
    I have a target table with 12 fields. The first 6 fields are defined as a unique constraint, UNIQUE_KEY. The other 6 fields are normal attributes. It is just a mapping between one source table and the target.
    My problem is: the first time I run the mapping, it inserts all records correctly into the target table. If I run the mapping again, it ends with this warning:
    ORA-00001: unique constraint (SCHEMAUSER.UNIQUE_KEY) violated
    I already found this link in the forum: Re: converting MERGE statements in mappings
    So I clicked on the target table and selected "UNIQUE_KEY" (my constraint name) for "Match by constraint" in the table operator properties on the left.
    My properties for all 6 unique-constraint fields are:
    Load Column When Updating Row: NO
    Update Operation: =
    Match Column When Updating Row: YES
    Load Column When Inserting Row: YES
    Match Column When Deleting Row: YES

    My properties for my attribute fields are:
    Load Column When Updating Row: YES
    Update Operation: =
    Match Column When Updating Row: NO
    Load Column When Inserting Row: YES
    Match Column When Deleting Row: NO

    I think I have a problem in one of these properties.
    Once again, my problem: I am not able to run the mapping without warnings while inserting new records into the table or updating existing ones.
    Please help me.

    Thank you
    David

    This has always worked for me.
    The solution is:
    Right-click the mapping -> Configuration -> Default Operating Mode = row based.

    But that is what causes the performance degradation.

    Click the Generate option in the menu,
    then Generation Results,
    select Intermediate,
    then select the output of the filter group.
    Run that query in the database and verify whether you get duplicates in the six columns.

  • Bottleneck on reqID generation, row insertion

    Running CF7.02 and SQL Server 2000, we are hitting bottlenecks on database insertions into our by far most active table of requests. Within the database transaction, we check to see if the request is a duplicate. Then we take an exclusive cflock with a 90-second timeout, set the key column to 1 more than the current max key, insert the new row, and release the lock. We then do CFLDAPs to check for the dup and then change an LDAP group to account for the request. Then the transaction ends.

    We get a lot of timeouts while trying to acquire the cflock. Under high user load (3-4 times the normal number of visitors to the site), the ColdFusion server becomes unresponsive.

    Is there something we can do on the database side or the ColdFusion app server side to reduce wait times and keep our site up? Any suggestion is appreciated.

    Farfel wrote:
    > Running CF7.02 and SQL Server 2000, we are hitting bottlenecks
    > on database inserts into our by far most active table of requests. Within the
    > database transaction, we check to see if the request is a duplicate. Then
    > we take an exclusive cflock with a 90-second timeout, set the key column to 1 more
    > than the current max key, insert the new row, and release the lock. We then CFLDAP to
    > look for the dup and then change an ldap group to account for the request. Then the
    > transaction ends.

    MS SQL Server uses predicate locking, so every row you see in a
    transaction stays locked until the end of your transaction. Since you are
    running a MAX-style query, you read the entire table, and the whole
    table is locked until the commit. And that includes the LDAP call, which
    can take a long time. (Try it: open 2 windows in Query Analyzer; in the
    first run "begin transaction; select max(x) from y; insert into y
    (x) values (z);" and then run the same thing in the second window, and you
    will see that it waits for a commit in the first window.)

    To resolve this problem, either use another way to generate the next ID, or
    upgrade to MS SQL Server 2005 and switch to snapshot isolation.
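
    One way to avoid the SELECT MAX pattern entirely, sketched for SQL Server (the table and column names here are hypothetical, not from the original post):

    ```sql
    -- Let the database assign the key instead of computing MAX(key) + 1
    CREATE TABLE requests (
        request_id INT IDENTITY(1,1) PRIMARY KEY,  -- auto-incrementing key
        payload    VARCHAR(255) NOT NULL
    );

    INSERT INTO requests (payload) VALUES ('example');
    SELECT SCOPE_IDENTITY();  -- the key just assigned in this scope
    ```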

    Jochem

    --
    Jochem van Dieten
    Adobe Community Expert for ColdFusion

  • Problem with UPDATE/INSERT loading type

    Hello

    I use the owb11g

    All I'm doing is loading into a target table using the UPDATE/INSERT loading type.

    There are duplicates in the source table, and I have a PK defined on the target table.

    Whenever I run the map, it says NO MORE DATA TO READ FROM SOCKET.

    What is the solution? Can there be a better way to do this?

    Help, please

    Why do you keep posting duplicate messages for the same problem?

    Please continue with your existing post (link below):
    No more data to read from socket

  • Error installing updates: "insert the microsoft XP Professional disk and click ok"

    original title: "insert the microsoft XP Professional disk and click ok"

    When I try to install my Windows XP updates, I get the message "insert the microsoft XP Professional disk and click ok". If I ever had this disk, it is long since gone. Is there another way I can get my updates?

    Hi Nick ev.

    Follow the steps in this document to complete a System Restore. This process will restore your computer to a previous point without affecting your personal information. If you have recently installed software or updates, and you select a restore point from before the installation, that software and those updates will have to be reinstalled.

  • Windows 8 refresh / the inserted media is not valid

    Windows 8.1 all of a sudden (probably due to a broken update package) started looping into "Preparing Automatic Repair". So I decided to do a Refresh.

    The laptop was installed with Windows 8.0 and upgraded to Windows 8.1. Now it does not recognize the media.

    How can I refresh Windows 8.1 when only Windows 8.0 media is available?

    Hello

    Thanks for posting your question in the Microsoft Community. We will help you with fixing this.

    Was this a copy of Windows 8 retail or OEM (original equipment manufacturer)?

    Check out the following links and see if they help fix the problem.

    How to refresh Windows 8.1 if upgraded from Windows 8 and files are missing:

    http://answers.Microsoft.com/en-us/Windows/Forum/windows8_1-system/how-to-refresh-Windows-81-if-upgraded-from-Windows/6b828a53-3267-46EC-b8d5-14e3996790d1?page=1&TM=1422759119185

    Windows Update / Reset Issues: The inserted Media is not valid

    http://answers.Microsoft.com/en-us/Windows/Forum/windows8_1-system/Windows-refresh-reset-issues-the-media-inserted-is/da87c6d9-A622-476e-aaab-fbf6932eb85e

    If you don't have the disc to install Windows or the system repair disc, contact the manufacturer of the computer.

    If you are using a retail copy of Windows 8, you can consult the following link for Windows 8/8.1 installation media.

    How to create the installation media for a PC refresh or reset:

    http://Windows.Microsoft.com/en-us/Windows-8/create-reset-refresh-media

    How to upgrade Windows with only a product key:

    http://Windows.Microsoft.com/en-GB/Windows-8/upgrade-product-key-only

    I hope this helps. Please let us know if you need more help.

  • Need to use bulk collect to insert records into multiple tables

    Hello

    I have a PL/SQL record type spanning several tables with multiple columns, and I am using BULK COLLECT with a FORALL statement. I want to insert the records into multiple tables.

    Please give me suggestions.

    FORALL is designed to be used with a single DML statement, which may be a dynamic SQL statement. However, I am not sure what advantage this will give you over iterating your list several times, once for each table - especially since there is overhead with dynamic SQL.

    Example 1 (dynamic SQL):

    begin

      ...

      forall i in vRecList.First..vRecList.Last
        execute immediate '
        begin
          insert into Table1 (Col1, Col2, Col3) values (:1, :2, :3);
          insert into Table2 (Col1, Col2, Col3) values (:1, :2, :3);
        end;' using vRecList(i).Col1, vRecList(i).Col2, vRecList(i).Col3;
    end;

    Another approach that should work (but I have not tested it) is using INSERT ALL with record-based inserts, but you need to try it on your version of Oracle - FORALL has changed between versions.  In this case vRecList must be compatible with both Table1%ROWTYPE and Table2%ROWTYPE.


    Example 2 (insert all):

    begin

      ...

      forall i in vRecList.First..vRecList.Last

        insert all

          into Table1 values vRecList(i)
          into Table2 values vRecList(i)
        select 1 from dual;

    end;

  • Fetch Bulk collect Insert error

    CREATE OR REPLACE PROCEDURE bulk_collect_limit (startrow IN NUMBER, endrow IN NUMBER, fetchsize IN NUMBER)

    IS

      TYPE sid IS TABLE OF NUMBER;

      TYPE screated_date IS TABLE OF DATE;

      TYPE slookup_id IS TABLE OF NUMBER;

      TYPE sdata IS TABLE OF VARCHAR2(50);

      l_sid sid;

      l_screated_date screated_date;

      l_slookup_id slookup_id;

      l_sdata sdata;

      l_start NUMBER;

      CURSOR c_data IS SELECT id, created_date, lookup_id, data1 FROM big_table WHERE id >= startrow AND id <= endrow;

      TYPE reclist IS TABLE OF c_data%ROWTYPE;

      recs reclist;

    BEGIN

      l_start := DBMS_UTILITY.get_time;

      OPEN c_data;

      LOOP

        FETCH c_data BULK COLLECT INTO recs LIMIT fetchsize;

        FOR i IN recs.FIRST .. recs.LAST
        LOOP
          INSERT INTO big_table2 VALUES (recs(i).id, recs(i).created_date, recs(i).lookup_id, recs(i).data1);
        END LOOP;

        EXIT WHEN c_data%NOTFOUND;

      END LOOP;

      CLOSE c_data;

      COMMIT;

      DBMS_OUTPUT.put_line('Total Elapsed :- ' || (DBMS_UTILITY.get_time - l_start) || ' hsecs');

    EXCEPTION

      WHEN OTHERS THEN

        RAISE;

    END;

    /

    SHOW ERRORS;

    Warning: execution completed with warning

    29/87    PLS-00302: component 'DATA1' must be declared

    29/87    PL/SQL: ORA-00984: column not allowed here

    29/6     PL/SQL: Statement ignored

    I get the above error on the insert statement.

    Please can I get help resolving it?

    I won't answer your question directly, but tell you something else - do not do this with bulk collect. Do it in a single SQL statement.

    Stop using loops, and stop committing inside loops.

    That will solve the error, make it less likely you get ORA-01555, generate less undo, and be more efficient.

    Oh, and this does nothing useful:

    EXCEPTION

    WHEN OTHERS THEN

    RAISE;

    The entire procedure should be:

    CREATE OR REPLACE PROCEDURE bulk_collect_limit (startrow IN NUMBER,endrow IN NUMBER,fetchsize IN NUMBER)
    IS
    
     l_start NUMBER;
    
    begin
    
    insert into big_table2(put a column list here for crikey's sake)
    select id,created_date,lookup_id,data1 FROM big_table WHERE id >= startrow AND id <= endrow;
    
    DBMS_OUTPUT.put_line('Total Elapsed Time :- ' || (DBMS_UTILITY.get_time - l_start) || ' hsecs');
    
    end;
    
  • How to temporarily disable the updating of inserted dates?

    Hello!

    DW CC 2015.1 on MacOS 10.11.3

    I use "Insert Date" (see: https://helpx.adobe.com/dreamweaver/using/insert-dates.html) to track the most recent update of content pages.  The timestamp is displayed after "Page updated:" on the content pages.

    I just discovered a systematic error repeated, without reusable content, in 1500+ pages. Ouch!

    The fix is simple, and I would like to use Find and Replace in DW to do the job, but I want to leave all the most-recent timestamps in these pages intact, so this correction must not touch them.

    My question: how do I temporarily disable the DW mechanism that updates the timestamp in the markup?

    I suspect that there is a visible JS file that implements the update, and that the path to success is to temporarily replace it with an inert version of the same name, but before I start digging for it...

    TIA

    The file you need to temporarily disable is at the following location on Windows: C:\Program Files\Adobe\Adobe Dreamweaver CC 2015\configuration\Translators\Date.htm. It will be in a similar location on Mac OS X in the Applications folder.

    If you use an earlier version of Dreamweaver on a PC running 64-bit Windows, look in C:\Program Files (x86).

    Date.htm contains two JavaScript functions, getTranslatorInfo() and translateMarkup(), which you need to temporarily replace with dummy functions.

    Be aware that you will be editing a program file. You need administrator privileges to do this. As long as you know what you're doing, it should not cause problems, but you do it at your own risk.

  • MERGE WITH SEVERAL UPDATES/INSERT - help

    MERGE INTO CUSTOMER DC USING ACCOUNT MDC ON (DC.CUSTOMER_KEY = MDC.ACC_KEY)

    WHEN MATCHED THEN
    UPDATE all columns WHERE MDC.INS_UPD_SCD = 'UPD'
    UPDATE SET ... WHERE MDC.INS_UPD_SCD = 'SCD'
    INSERT WHERE MDC.INS_UPD_SCD = 'SCD'
    WHEN NOT MATCHED THEN INSERT

    Hi all

    I wanted to know if it is possible to have multiple statements (UPDATE and INSERT) when using MERGE and WHEN MATCHED. I want to implement a kind of loop within the merge - is it possible? Please give me an example of the syntax; thanks for your help.

    OK. Then apply the MDC.INS_UPD_SCD = 'SCD' condition only to the required columns. Like this:

    UPDATE SET COL1 = CASE WHEN (MDC.INS_UPD_SCD = 'UPD') THEN COL1
                           WHEN (MDC.INS_UPD_SCD = 'SCD') THEN COL11
                      END,

               COL2 = CASE WHEN (MDC.INS_UPD_SCD = 'UPD') THEN COL2
                      END,

               COL3 = CASE WHEN (MDC.INS_UPD_SCD = 'UPD') THEN COL3
                      END,

               .
               .
               .

               COLN = CASE WHEN (MDC.INS_UPD_SCD = 'UPD') THEN COLN
                           WHEN (MDC.INS_UPD_SCD = 'SCD') THEN COLNN
                      END;

  • Update/Insert Trigger help

    Hello

    I have 3 tables; their definitions are as follows:

    tab1 (tno1 number, tname1 varchar2(10));

    tab2 (tno2 number, tname2 varchar2(10), value_type varchar2(10), transaction_type varchar2(10), transaction_date date);

    tab3 (tno3 number, tname3 varchar2(10));

    I want tab2 to get updated, based on a lookup, when I insert or update values in tab1.

    Suppose I issue:

    update tab1 set tname1 = 'ABC' where tno1 = 1234;

    Then the values from tab1 and tab3 should be saved into tab2. Example:

    (1) update tab1 set tname1 = 'ABC' where tno1 = 1234;

    (2) select * from tab2;

    tno2   tname2   value_type   transaction_type   transaction_date

    1234   XXX      OLD          UPDATE             30-SEP-2013

    1234   YYY      NEW          UPDATE             30-SEP-2013

    The second row shows tname2 YYY, which belongs to tab3 (tname3).

    tno1, tno2 and tno3 are common to tables tab1, tab2 and tab3 respectively.

    How can I do that please help me...

    Thanks in advance...

    I'm going to ignore the previous comments because they don't make sense.

    How are you passing values to it when it is a trigger?   You pass values into procedures and functions, but not into triggers.

    Based on your original description, it seems to work for me...

    SQL> create table tab1 (tno1 number, tname1 varchar2(10))
      2  /

    Table created.

    SQL> create table tab2 (tno2 number, tname2 varchar2(10), value_type varchar2(10), transaction_type varchar2(10), transaction_date date)
      2  /

    Table created.

    SQL> create table tab3 (tno3 number, tname3 varchar2(10))
      2  /

    Table created.

    SQL> insert into tab3 values (1234, 'YYY')
      2  /

    1 row created.

    SQL> create or replace trigger trg_update_tname
      2  after insert or update of tname1 on tab1
      3  for each row
      4  begin
      5    if updating then
      6      insert into tab2 values (:new.tno1, :new.tname1, 'OLD', 'UPDATE', sysdate);
      7      insert into tab2
      8      select :new.tno1, tname3, 'NEW', 'UPDATE', sysdate
      9      from tab3
     10      where tno3 = :new.tno1;
     11    end if;
     12    if inserting then
     13      insert into tab2
     14      select :new.tno1, tname3, 'NEW', 'UPDATE', sysdate
     15      from tab3
     16      where tno3 = :new.tno1;
     17    end if;
     18  end;
     19  /

    Trigger created.

    SQL> insert into tab1 values (1234, 'XXX')
      2  /

    1 row created.

    SQL> select * from tab2
      2  /

          TNO2 TNAME2     VALUE_TYPE TRANSACTION_TYPE TRANSACTION_DATE
    ---------- ---------- ---------- ---------------- --------------------
          1234 YYY        NEW        UPDATE           01-OCT-2013 07:55:07

    SQL> update tab1 set tname1 = 'ABC' where tno1 = 1234
      2  /

    1 row updated.

    SQL> select * from tab2
      2  /

          TNO2 TNAME2     VALUE_TYPE TRANSACTION_TYPE TRANSACTION_DATE
    ---------- ---------- ---------- ---------------- --------------------
          1234 YYY        NEW        UPDATE           01-OCT-2013 07:55:07
          1234 ABC        OLD        UPDATE           01-OCT-2013 07:55:07
          1234 YYY        NEW        UPDATE           01-OCT-2013 07:55:07

    If that is not what you want, then you will need to explain in more detail.
