APPEND hint in a multi-table insert

Hi all:

Let me say at the outset that I have no example dataset to post; this is just an issue where I'm hoping to get some direction to help with my own query.

I have a multi-table insert into two tables. When I use the APPEND hint, the EXPLAIN PLAN output gives no indication that the hint is respected. But if I change the statement to an insert into a single table, then I see "DIRECT LOAD INTO" in the plan.

I'm stumped as to how to begin troubleshooting. Does anyone have ideas on how you might tackle a problem like this?

Thanks in advance!

Looks like it works right out of the box. Here is an example using 11.2.0.2:

create table t1 (c1 number) parallel 2;
create table t2 (c1 number) parallel 2;

alter session enable parallel dml;

explain plan for
insert all
  into t1 (c1) values (rnd)
  into t2 (c1) values (rnd)
(
select
  1 as rnd
from
  dual
connect by level < 10000
);

SELECT * FROM TABLE(dbms_xplan.display);

rollback;

alter session disable parallel dml;

Here is the DBMS_XPLAN output showing DIRECT LOAD INTO for both tables:

Plan hash value: 2997348579

----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                   |          |     1 |     3 |     2   (0)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                    |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)              | :TQ10001 |       |       |            |          |  Q1,01 | P->S | QC (RAND)  |
|   3 |    MULTI-TABLE INSERT              |          |       |       |            |          |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE                     |          |       |       |            |          |  Q1,01 | PCWP |            |
|   5 |      PX SEND ROUND-ROBIN           | :TQ10000 |       |       |            |          |        | S->P | RND-ROBIN  |
|   6 |       VIEW                         |          |     1 |     3 |     2   (0)| 00:00:01 |        |      |            |
|*  7 |        CONNECT BY WITHOUT FILTERING|          |       |       |            |          |        |      |            |
|   8 |         FAST DUAL                  |          |     1 |       |     2   (0)| 00:00:01 |        |      |            |
|   9 |     DIRECT LOAD INTO               | T1       |       |       |            |          |  Q1,01 | PCWP |            |
|  10 |     DIRECT LOAD INTO               | T2       |       |       |            |          |  Q1,01 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

7 - filter(LEVEL<10000)

Now, there are a number of restrictions on direct-path loads as soon as constraints and triggers get involved. See the restrictions in this section: [Oracle 11.2 INSERT documentation | http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/statements_9014.htm#SQLRF01604]
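Independently of the plan output, a quick way to confirm whether a given statement really ran direct-path (reusing the t1 table from the example above) is that a direct-path insert leaves the table unreadable in the same transaction until you commit:

insert /*+ append */ into t1 select 1 from dual connect by level <= 100;

-- ORA-12838 ("cannot read/modify an object after modifying it in parallel")
-- on the next query means the load really did go direct-path;
-- a normal result set means the hint was silently ignored.
select count(*) from t1;

rollback;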

NOTE: Edited to remove the sentence saying that PARALLEL DML was the key. It is not, as demonstrated in later messages, but it does still trigger direct-path inserts for an INSERT ALL.
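For comparison, a serial variation (same t1 and t2, no parallel DML) with the APPEND hint spelled out on the INSERT ALL; if the hint is honoured on your version you should still see DIRECT LOAD INTO for both tables, and if you don't, check t1/t2 for triggers or referential constraints:

explain plan for
insert /*+ append */ all
  into t1 (c1) values (rnd)
  into t2 (c1) values (rnd)
(
select
  1 as rnd
from
  dual
connect by level < 10000
);

SELECT * FROM TABLE(dbms_xplan.display);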

Published by: scatmull on August 22, 2011 12:55

Published by: scatmull on August 22, 2011 13:56

Tags: Database

Similar Questions

  • Insert /*+ append */ and direct-path INSERT

    Hi guys

    Does the insert /*+ append */ hint cause Oracle 10g to use a direct-path INSERT?
    And if insert /*+ append */ does cause Oracle to use a direct-path INSERT, is insert /*+ append */ subject to the same restrictions as direct path, such as "the target table cannot have any triggers or referential integrity constraints defined on it"?



    Thank you

    How difficult would it be for you to look up the answer in the documentation, rather than abusing this forum by asking doc questions and flaming fellow posters?

    ------------
    Sybrand Bakker
    Senior Oracle DBA

  • Insert /*+ append */ into TableName Select /*+ parallel(t, 4, 1) */ * from Tabl

    Hello

    I use
    Insert  /*+ append */   into TableNew select  /*+ parallel( t,4,1) */  * from TableOld t
    to load data into one table, TableNew, from another table of the same structure, TableOld.

    It takes me more than 4-5 hours to do this for about 1.5 million rows, with one column defined as XMLType as well.

    - Are there any rules for determining the degree of parallelism, or is it just trial and error?
    - Or set none and let the optimizer decide?
    - Any other rules I should keep in mind when trying to optimize this operation?

    Thank you...

    Edited by: BluShadow on December 6, 2012 11:20
    added {noformat}{noformat} tags for readability. Read: {message:id=9360002} and learn to do this yourself.

    Ben hassen says:

    user8941550 wrote:
    So I tried,

    Insert /*+ append */ into TableNew select * from TableOld t
    and it is now running in an hour.

    No explanation for this.

    I don't recommend this way (insert into ... select ...) for large tables,
    because you have no ability to commit during the insert (only at the end of the insert process can you commit).
    It also needs much more undo space (in case you roll back after the insert statement).

    But committing in blocks or chunks is bad from a transactional point of view and can cause more problems later if there is an error in one of the blocks of inserts, because you will not be able to easily roll back data that has already been committed.
    I highly recommend using a single INSERT ... SELECT ... statement in most cases (there will always be rare exceptions), simply to maintain transactional integrity.

    A better way is to define a cursor, fetch the rows in blocks (e.g. 1000 rows), and run a commit after every 1000 rows.
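    For what it's worth, a minimal sketch of the batched approach being described here, using the TableOld/TableNew names from the question (the reply below explains why this is usually a bad idea):

    declare
      cursor c is select * from TableOld;
      type t_rows is table of TableOld%rowtype;
      l_rows t_rows;
    begin
      open c;
      loop
        fetch c bulk collect into l_rows limit 1000;  -- read 1000 rows at a time
        exit when l_rows.count = 0;
        forall i in 1 .. l_rows.count
          insert into TableNew values l_rows(i);
        commit;                                       -- commit after every batch
      end loop;
      close c;
    end;
    /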

    That's a great way to a) slow down the process and b) make it difficult to recover if a problem occurs.
    Whenever you issue a commit, it tells Oracle to write the data to the data files, which is performed by the writer processes. By default, the database starts with N writer processes ("N" depends on the database configuration, so let's assume 4 writer processes for example). Whenever a commit is issued, a writer process gets allocated to the task of writing the data to the data files as required. The workload is shared between these processes, so if a further commit is issued and one writer process is already busy, another process will take that request. If the database starts getting a lot of commits issued, and finds that all the existing writer processes are busy, it will spawn new writer processes to handle the new requests, taking up more server resources (memory, file and process handles etc.), and the more writer processes there are, the more chance they have of creating wait states between themselves as they all try to write to the (same) data files, especially if the data being committed is closely located (e.g. the same table and associated tablespace). These resources are not released again when the tasks are complete; until the server is restarted or Oracle is shut down, Oracle keeps them there waiting, in case it gets the same kind of workload again. Oracle is perfectly capable of processing millions of rows, so suggesting that it is 'better' to insert in chunks of 1,000 records at a time, committing after each one, not only slows down the process from a PL/SQL code perspective, but also slows things down from a server resources point of view, as well as potentially causing file I/O contention issues... including the possibility of I/O contention problems on the (same) disks. It is certainly NOT "a better way".

    These suggestions are usually given by people who do not understand the underlying architecture of Oracle databases, something which is taught on the Oracle DBA courses, and it's a course I would recommend any so-called professional developer attend, even if they don't intend to be a DBA. Knowing how the architecture of a database works is certainly useful for knowing how to write good code... not only in Oracle, but in other RDBMS products too (when I was an Ingres database developer, I took the Ingres DBA courses and it was likewise valuable for understanding how to write good code).

  • Insert append and alter table truncate partition

    Hi all

    My DB is 11.2 on an Exadata machine. I've been doing a data migration for PROD, and the PROD team says that my DML blocked their DDL. I want to get this confirmed here before I send an email to defend myself.
    My DML is
    insert /*+ append noparallel(t) */ into GPOS_XXX_XXX PARTITION(p3) t 
    SELECT /*+ noparallel(s)*/* FROM  GPOS_XXX_XXX@adw3u_izoom_admin s WHERE srce_sys_id = 1; 
    commit;
    Their DDL looks like
    ALTER TABLE GPOS_XXX_XXX TRUNCATE PARTITION p4
    I did a test and it shows that they run at the same time just fine; no resource-busy error is thrown.

    Am I missing something here?

    Best regards
    Leon

    >
    Am I missing something here?
    >
    I don't think so. I did a test on 11gR2 and there is no conflict. I think your prod team is wrong. Ask them to provide a reference to support their claim.

    As long as your direct-path INSERT specifies a particular partition, only that partition will be locked.
    See this AskTom thread from last year where he addresses a question similar to, but not quite the same as, yours.
    http://asktom.Oracle.com/pls/Apex/f?p=100:11:0:P11_QUESTION_ID:3580062500346902748
    >
    APPEND (or a parallel direct-path load) locks the segment it targets. But that table lock only prevents other inserts/modifications - NOT queries.

    And also, if you know that you are inserting into a single partition and you write

    Insert /*+ append */ into table partition (pname) select ...

    Then only THIS partition will be locked.

  • Simple question about Append hint

    ORACLE-BASE - title

    From the link above, I just have a basic question about the APPEND hint...

    How the APPEND hint affects performance

    The APPEND hint tells the optimizer to perform a direct-path insert, which improves the performance of INSERT .. SELECT operations for several reasons:

    • Data is added at the end of the table, instead of trying to use existing free space within the table.
    • Data is written directly to the data files, bypassing the buffer cache.
    • Referential integrity constraints are ignored.

    I just wanted to understand to what extent the third point is correct.

    -----------------

    CREATE TABLE emp
    (
    emp_id NUMBER PRIMARY KEY,
    emp_name VARCHAR2 (100),
    dept_id NUMBER
    );

    CREATE TABLE dept
    (
    dept_id NUMBER PRIMARY KEY,
    dept_name VARCHAR2 (100)
    );

    ALTER TABLE emp ADD FOREIGN KEY (dept_id) REFERENCES dept (dept_id);


    INSERT /*+ append */
    INTO emp
    SELECT 1, 'King', 100 FROM dual;

    COMMIT;

    The insert will definitely give an error

    ORA-02291: integrity constraint (SCOTT.SYS_C0013725324) violated - parent key not found

    Am I missing something here?

    Cheers,

    Manik.

    sqlldr can and does ignore referential integrity / triggers and even uniqueness in a direct-path load; insert /*+ append */ does not - you do not get direct path when they (constraints) exist.


    parallel is always direct path; if you go parallel, you are appending.

    https://asktom.Oracle.com/pls/asktom/f?p=100:11:0:no:P11_QUESTION_ID:1211797200346279484
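    A quick way to see that restriction in action, sketched with hypothetical tables (plan output as shown on recent 11g versions):

    create table parent_t (id number primary key);
    create table child_t  (id number references parent_t);

    explain plan for
    insert /*+ append */ into child_t select id from parent_t;

    select * from table(dbms_xplan.display);
    -- With the enabled foreign key the plan shows LOAD TABLE CONVENTIONAL instead of
    -- LOAD AS SELECT, i.e. the APPEND hint is quietly ignored.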

  • Insert select on the same table: possible without side effects?

    I have a very large table T1 containing millions of records. I need to process its rows and create a few new rows based on a selection.

    Table T1 contains events, and one of them, with code 100, is created from the further processing of other events inside the table.

    My code is as follows:

    insert /*+ append */ into T1 (code, c1, c2, ...)
    select 100, c1, c2, ... from T1 where (code=20 or code=10) and <other conditions>...
    
    

    as you can see, I'm extracting rows from T1 to insert them back into T1 with a different code, and I use direct path in order to get good performance.

    My fear is: since the select is made from the same table, do I risk data loss? In general, is this good practice? Or is it better to create another table?

    Hello

    No, I don't think there can be any loss of data. But that may depend on the behavior of the application and your WHERE clause.

    I will explain how it is processed, so you can see if it's OK:

    1. the table is locked because of the insert append

    2. rows are read by the select, made consistent as of the state at the beginning of the query - step 1

    3. rows are inserted at the end, above the high-water mark

    4. index entries for the new rows are sorted and merged into the indexes

    5. the high-water mark is adjusted - the new rows become visible and the lock is released

    Note that 2. and 3. occur at the same time: rows are inserted as they are read.

    Note that anyone can select from the table during the operation - they see only the committed changes - i.e. the state as of step 1.

    all other DML waits for the lock to be released, and will see the new rows only then

    If you have anything that prevents the direct-path insert, the append hint will be ignored. So, if you must rely on the lock from step 1, then it is better to lock explicitly with LOCK TABLE. But I don't think you need that.

    Kind regards

    Franck.
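    A short sketch of the explicit-lock variant Franck mentions, if you really need to depend on other DML being blocked (column list simplified from the question):

    lock table T1 in exclusive mode;

    insert /*+ append */ into T1 (code, c1, c2)
    select 100, c1, c2 from T1 where code in (10, 20);

    commit;   -- the commit releases the table lock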

  • Difference in performance between CTAS and INSERT /*+ APPEND */ INTO

    Hi all

    I have a question about CTAS and "Insert /*+ Append */ Into" statements.

    Following on from that, I have a question: I do not understand the difference in elapsed times on EXADATA.

    The two source tables (g02_f01 and g02_f02) have no partitions. But I could partition the tables by hash on the "ip_id" column, and I tried running the same query against the partitioned tables. It changed nothing in the execution times.

    I gathered statistics for all the tables. Both tables have 13,176,888 records. Both tables have the same unique "ip_id" column. I want to combine these tables into a single table.

    First request:

    Insert /*+ append parallel(a, 16) */ into dg.tiz_irdm_g02_cc a
    (ip_id, process_date, ...)
    select /*+ parallel(a, 16) parallel(b, 16) */ *
    from
    tgarstg.tst_irdm_g02_f01 a,
    tgarstg.tst_irdm_g02_f02 b
    where a.ip_id = b.ip_id


    Elapsed => 45:00 minutes


    Second request:

    create table dg.tiz_irdm_g02_cc nologging parallel 16 compress for query as
    select /*+ parallel(a, 16) parallel(b, 16) */ *
    from
    tgarstg.tst_irdm_g02_f01 a,
    tgarstg.tst_irdm_g02_f02 b
    where a.ip_id = b.ip_id

    Elapsed => 04:00 minutes


    Execution plans are:


    1. INSERT statement execution plan:

    Plan hash value: 3814019933

    ---------------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                         | Name             | Rows  | Bytes | TempSpc | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    ---------------------------------------------------------------------------------------------------------------------------------------------
    |   0 | INSERT STATEMENT                  |                  |   13M |   36G |         |  127K  (1) | 00:00:05 |        |      |            |
    |   1 |  LOAD AS SELECT                   | TIZ_IRDM_G02_CC  |       |       |         |            |          |        |      |            |
    |   2 |   PX COORDINATOR                  |                  |       |       |         |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)            | :TQ10002         |   13M |   36G |         |  127K  (1) | 00:00:05 |  Q1,02 | P->S | QC (RAND)  |
    |*  4 |     HASH JOIN BUFFERED            |                  |   13M |   36G |    921M |  127K  (1) | 00:00:05 |  Q1,02 | PCWP |            |
    |   5 |      PX RECEIVE                   |                  |   13M |   14G |         |  5732  (5) | 00:00:01 |  Q1,02 | PCWP |            |
    |   6 |       PX SEND HASH                | :TQ10000         |   13M |   14G |         |  5732  (5) | 00:00:01 |  Q1,00 | P->P | HASH       |
    |   7 |        PX BLOCK ITERATOR          |                  |   13M |   14G |         |  5732  (5) | 00:00:01 |  Q1,00 | PCWC |            |
    |   8 |         TABLE ACCESS STORAGE FULL | TST_IRDM_G02_F02 |   13M |   14G |         |  5732  (5) | 00:00:01 |  Q1,00 | PCWP |            |
    |   9 |      PX RECEIVE                   |                  |   13M |   21G |         | 18353  (3) | 00:00:01 |  Q1,02 | PCWP |            |
    |  10 |       PX SEND HASH                | :TQ10001         |   13M |   21G |         | 18353  (3) | 00:00:01 |  Q1,01 | P->P | HASH       |
    |  11 |        PX BLOCK ITERATOR          |                  |   13M |   21G |         | 18353  (3) | 00:00:01 |  Q1,01 | PCWC |            |
    |  12 |         TABLE ACCESS STORAGE FULL | TST_IRDM_G02_F01 |   13M |   21G |         | 18353  (3) | 00:00:01 |  Q1,01 | PCWP |            |
    ---------------------------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       4 - access("AIRDM_G02_F01"."IP_ID"="AIRDM_G02_F02"."IP_ID")

    2. CTAS execution plan:

    Plan hash value: 3613570869

    ---------------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                         | Name             | Rows  | Bytes | TempSpc | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    ---------------------------------------------------------------------------------------------------------------------------------------------
    |   0 | CREATE TABLE STATEMENT            |                  |   13M |   36G |         |  397K  (1) | 00:00:14 |        |      |            |
    |   1 |  PX COORDINATOR                   |                  |       |       |         |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)             | :TQ10002         |   13M |   36G |         |  255K  (1) | 00:00:09 |  Q1,02 | P->S | QC (RAND)  |
    |   3 |    LOAD AS SELECT                 | TIZ_IRDM_G02_CC  |       |       |         |            |          |  Q1,02 | PCWP |            |
    |*  4 |     HASH JOIN                     |                  |   13M |   36G |   1842M |  255K  (1) | 00:00:09 |  Q1,02 | PCWP |            |
    |   5 |      PX RECEIVE                   |                  |   13M |   14G |         | 11465  (5) | 00:00:01 |  Q1,02 | PCWP |            |
    |   6 |       PX SEND HASH                | :TQ10000         |   13M |   14G |         | 11465  (5) | 00:00:01 |  Q1,00 | P->P | HASH       |
    |   7 |        PX BLOCK ITERATOR          |                  |   13M |   14G |         | 11465  (5) | 00:00:01 |  Q1,00 | PCWC |            |
    |   8 |         TABLE ACCESS STORAGE FULL | TST_IRDM_G02_F02 |   13M |   14G |         | 11465  (5) | 00:00:01 |  Q1,00 | PCWP |            |
    |   9 |      PX RECEIVE                   |                  |   13M |   21G |         | 36706  (3) | 00:00:02 |  Q1,02 | PCWP |            |
    |  10 |       PX SEND HASH                | :TQ10001         |   13M |   21G |         | 36706  (3) | 00:00:02 |  Q1,01 | P->P | HASH       |
    |  11 |        PX BLOCK ITERATOR          |                  |   13M |   21G |         | 36706  (3) | 00:00:02 |  Q1,01 | PCWC |            |
    |  12 |         TABLE ACCESS STORAGE FULL | TST_IRDM_G02_F01 |   13M |   21G |         | 36706  (3) | 00:00:02 |  Q1,01 | PCWP |            |
    ---------------------------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------

       4 - access("AIRDM_G02_F01"."IP_ID"="AIRDM_G02_F02"."IP_ID")

    Oracle version:

    Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
    PL/SQL Release 11.2.0.4.0 - Production
    CORE 11.2.0.4.0 Production
    TNS for Linux: Version 11.2.0.4.0 - Production
    NLSRTL Version 11.2.0.4.0 - Production

    Notice how this additional distribution has disappeared from the non-partitioned table.

    I think that with the partitioned table Oracle tried to balance the number of slaves against the number of partitions it expected to use and decided to distribute the data to get a 'fair share' of the workload, but had not allowed for the side effect of the buffered hash join that would appear, and the extra messaging for the distribution.

    You could try the pq_distribute() hint on the insert to tell Oracle that it should not distribute like that. For example, based on your original code:

    Insert /*+ append parallel(a, 16) pq_distribute(a none) */ into dg.tiz_irdm_g02_cc a ...

    This may give you the performance you want with the partitioned table, but check what it does to the space allocation, as it can introduce a large number (16) of extents per segment that are not completely filled and therefore be rather wasteful of space.

    Regards,

    Jonathan Lewis
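    A quick way to check the space allocation Jonathan mentions, once the load has completed (owner and table name taken from the question):

    select segment_name, count(*) as extents, round(sum(bytes)/1024/1024) as mb
    from   dba_extents
    where  owner = 'DG'
    and    segment_name = 'TIZ_IRDM_G02_CC'
    group  by segment_name;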

  • I created a form based on two tables that also have sequences. When I use CreateInsert, a row is inserted only for the first table's fields, and the second table's fields remain empty. Why?

    Mr President.

    I created a form based on two tables that also have sequences. When I use CreateInsert, a row is inserted only for the first table's fields, and the second table's fields remain empty. Why?

    formdoubletables.png

    the page source is

    <?xml version='1.0' encoding='UTF-8'?>
    <ui:composition xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:af="http://xmlns.oracle.com/adf/faces/rich"
                    xmlns:f="http://java.sun.com/jsf/core">
      <af:panelFormLayout id="pfl1">
        <af:group id="Group">
          <af:inputText value="#{bindings.VoucherId.inputValue}" label="#{bindings.VoucherId.hints.label}"
                        required="#{bindings.VoucherId.hints.mandatory}" columns="#{bindings.VoucherId.hints.displayWidth}"
                        maximumLength="#{bindings.VoucherId.hints.precision}"
                        shortDesc="#{bindings.VoucherId.hints.tooltip}" id="it1">
            <f:validator binding="#{bindings.VoucherId.validator}"/>
            <af:convertNumber groupingUsed="false" pattern="#{bindings.VoucherId.format}"/>
          </af:inputText>
          <af:inputDate value="#{bindings.VoucherDate.inputValue}" label="#{bindings.VoucherDate.hints.label}"
                        required="#{bindings.VoucherDate.hints.mandatory}"
                        columns="#{bindings.VoucherDate.hints.displayWidth}"
                        shortDesc="#{bindings.VoucherDate.hints.tooltip}" id="id1">
            <f:validator binding="#{bindings.VoucherDate.validator}"/>
            <af:convertDateTime pattern="#{bindings.VoucherDate.format}"/>
          </af:inputDate>
          <af:inputText value="#{bindings.Credit.inputValue}" label="#{bindings.Credit.hints.label}"
                        required="#{bindings.Credit.hints.mandatory}" columns="#{bindings.Credit.hints.displayWidth}"
                        maximumLength="#{bindings.Credit.hints.precision}" shortDesc="#{bindings.Credit.hints.tooltip}"
                        id="it2">
            <f:validator binding="#{bindings.Credit.validator}"/>
          </af:inputText>
        </af:group>
        <af:group id="g1">
          <af:inputText value="#{bindings.Lineitem.inputValue}" label="#{bindings.Lineitem.hints.label}"
                        required="#{bindings.Lineitem.hints.mandatory}" columns="#{bindings.Lineitem.hints.displayWidth}"
                        maximumLength="#{bindings.Lineitem.hints.precision}" shortDesc="#{bindings.Lineitem.hints.tooltip}"
                        id="it3">
            <f:validator binding="#{bindings.Lineitem.validator}"/>
            <af:convertNumber groupingUsed="false" pattern="#{bindings.Lineitem.format}"/>
          </af:inputText>
          <af:inputText value="#{bindings.VoucherId1.inputValue}" label="#{bindings.VoucherId1.hints.label}"
                        required="#{bindings.VoucherId1.hints.mandatory}"
                        columns="#{bindings.VoucherId1.hints.displayWidth}"
                        maximumLength="#{bindings.VoucherId1.hints.precision}"
                        shortDesc="#{bindings.VoucherId1.hints.tooltip}" id="it4">
            <f:validator binding="#{bindings.VoucherId1.validator}"/>
            <af:convertNumber groupingUsed="false" pattern="#{bindings.VoucherId1.format}"/>
          </af:inputText>
          <af:inputText value="#{bindings.Debit.inputValue}" label="#{bindings.Debit.hints.label}"
                        required="#{bindings.Debit.hints.mandatory}" columns="#{bindings.Debit.hints.displayWidth}"
                        maximumLength="#{bindings.Debit.hints.precision}" shortDesc="#{bindings.Debit.hints.tooltip}"
                        id="it5">
            <f:validator binding="#{bindings.Debit.validator}"/>
          </af:inputText>
          <af:inputText value="#{bindings.Credit1.inputValue}" label="#{bindings.Credit1.hints.label}"
                        required="#{bindings.Credit1.hints.mandatory}" columns="#{bindings.Credit1.hints.displayWidth}"
                        maximumLength="#{bindings.Credit1.hints.precision}" shortDesc="#{bindings.Credit1.hints.tooltip}"
                        id="it6">
            <f:validator binding="#{bindings.Credit1.validator}"/>
          </af:inputText>
          <af:inputText value="#{bindings.Particulars.inputValue}" label="#{bindings.Particulars.hints.label}"
                        required="#{bindings.Particulars.hints.mandatory}"
                        columns="#{bindings.Particulars.hints.displayWidth}"
                        maximumLength="#{bindings.Particulars.hints.precision}"
                        shortDesc="#{bindings.Particulars.hints.tooltip}" id="it7">
            <f:validator binding="#{bindings.Particulars.validator}"/>
          </af:inputText>
          <af:inputText value="#{bindings.Amount.inputValue}" label="#{bindings.Amount.hints.label}"
                        required="#{bindings.Amount.hints.mandatory}" columns="#{bindings.Amount.hints.displayWidth}"
                        maximumLength="#{bindings.Amount.hints.precision}" shortDesc="#{bindings.Amount.hints.tooltip}"
                        id="it8">
            <f:validator binding="#{bindings.Amount.validator}"/>
            <af:convertNumber groupingUsed="false" pattern="#{bindings.Amount.format}"/>
          </af:inputText>
        </af:group>
        <f:facet name="footer">
          <af:button text="Submit" id="b1"/>
          <af:button actionListener="#{bindings.CreateInsert.execute}" text="CreateInsert"
                     disabled="#{!bindings.CreateInsert.enabled}" id="b2"/>     
          <af:button actionListener="#{bindings.Commit.execute}" text="Commit" disabled="#{!bindings.Commit.enabled}"
                     id="b3"/>
          <af:button actionListener="#{bindings.Rollback.execute}" text="Rollback" disabled="#{!bindings.Rollback.enabled}"
                     immediate="true" id="b4">
            <af:resetActionListener/>
          </af:button>
        </f:facet>
      </af:panelFormLayout>
    </ui:composition>
    
    
    
    

    Regards,

    Go to your VO wizard, select the Entity Objects tab and check whether both EOs are editable or not.

    Cheers

    AJ

  • Trigger on several tables

    I have a requirement like this, but will explain it in terms of two tables, emp and dept.

    I have a table emp (empno, ename, deptno) and a table dept (deptno, dname).
    I have a third table, emp_details (empno, ename, deptno, dname).

    I have a procedure that returns the emp details.
    I have a query in the above procedure, select * from emp_details, that returns all the details.

    I want to write a trigger such that
    if I insert or update a row in the emp or dept table, then it should get inserted/updated in the emp_details table as well.
    I am able to write triggers on a single table. I can't get the rows inserted or updated when it comes to multiple tables.

    Data:

    create table emp
    (empno number primary key,
    ename varchar2(20),
    deptno number
    );

    create table dept
    (deptno number primary key,
    dname varchar2(20)
    );

    Insert into dept values(10,'hyderabad');
    Insert into dept values(20,'bangalore');

    Insert into emp values(1,'A',10);
    Insert into emp values(2,'B',20);


    CREATE TABLE EMP_DETAILS
    AS
    SELECT E.ENAME, E.EMPNO, E.DEPTNO, D.DNAME
    FROM
    EMP E, DEPT D
    WHERE E.DEPTNO = D.DEPTNO;


    CREATE OR REPLACE PROCEDURE PROC_EMP_DETAILS (P_EMPNO OUT EMP.EMPNO%TYPE, P_ENAME OUT EMP.ENAME%TYPE, P_DEPTNO OUT EMP.DEPTNO%TYPE, P_DNAME OUT DEPT.DNAME%TYPE)
    AS
    /* SAMPLE CODE */
    BEGIN
    SELECT * FROM EMP_DETAILS;
    END;
    /

    I tried like this

    CREATE OR REPLACE TRIGGER TRIG_EMP_DETAILS
    AFTER INSERT OR UPDATE ON EMP
    FOR EACH ROW
    REFERENCING OLD AS OLD NEW AS NEW
    BEGIN
    INSERT INTO EMP_DETAILS VALUES (:NEW.EMPNO, :NEW.ENAME, :NEW.DEPTNO, ?);

    ???? - dname is a column we get from the dept table; how do I reference it here?

    Could you please guide me in this regard?



    Thanks in advance
    KVB

    Trigger on several tables

    I think that you don't need triggers on several tables: one is enough:

    CREATE OR REPLACE TRIGGER trig_emp_details
      AFTER INSERT OR UPDATE
      ON emp
      FOR EACH ROW
    BEGIN
      INSERT INTO emp_details
        SELECT :new.empno, :new.ename, :new.deptno, dname
          FROM dept
         WHERE deptno = :new.deptno;
    END;
    
  • Merge into several tables

    Hello! Is it possible to MERGE INTO several tables under certain conditions? That is, if the condition is satisfied the data goes to one table, otherwise to the other.

    You cannot. Only inserts can be multi-table.
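    The closest workaround, for the insert part only, is a conditional multi-table insert; a sketch with illustrative names:

    insert first
      when status = 'A' then
        into table_a (id, status) values (id, status)
      else
        into table_b (id, status) values (id, status)
    select id, status from staging_t;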

  • ORA-22805 when attempting to insert validated XML into a schema-based table

    Hello

    I can't for the life of me understand why I get the error "cannot insert NULL object into object tables or nested tables" when attempting to insert an XML file that I have validated, with several validators, against the schema I registered. Here are the schema and the file. I tried putting dummy data in the empty elements. I tried escaping the apostrophes (&apos;) that surround my regex and the values in the file. Beyond that, I don't know what could be NULL.

    Schema:


    <? XML version = "1.0" encoding = "utf-8"? >
    < xs: Schema elementFormDefault = "qualified".
    xmlns: XS = "http://www.w3.org/2001/XMLSchema".
    targetNamespace = "http://www.trxi.com/schemas/ImportSchema."
    xmlns = "http://www.trxi.com/schemas/ImportSchema" >

    < xs:simpleType name = "hexValue" >
    < xs:restriction base = "XS: String" >
    [< value="0x'[0-9a-fA-F][0-9a-fA-F pattern]" "/ >"
    < / xs:restriction >
    < / xs:simpleType >

    < xs: element name = "ImportTemplate" >
    < xs: complexType >
    < xs: SEQUENCE >
    < xs: element name = "DefaultDirectory" type = "xs: String" / >
    < xs: element name = "TableName" type = "xs: String" / >
    < xs: element name = "RecordDelimiter" type = "hexValue" / >
    < xs: element name = "Parameters FieldDelimiter" type = "hexValue" / >
    < xs: element name = "ColumnMapping" >
    < xs: complexType >
    < xs: SEQUENCE >
    < xs: element name = "Column" maxOccurs = "unbounded" >
    < xs: complexType >
    < xs: SEQUENCE >
    < xs: element name = "ColumnName" type = "xs: String" / >
    < xs: element name = "Datatype" >
    < xs:simpleType >
    < xs:restriction base = "XS: String" >
    < xs:enumeration value = "VARCHAR2" / >
    < xs:enumeration value = 'NUMBER' / >
    < xs:enumeration value = "DATE" / >
    < / xs:restriction >
    < / xs:simpleType >
    < / xs: element >
    < xs: element name = "DestinationTable" type = "xs: String" / >
    < xs: element name = "DestinationColumn" type = "xs: String" / >
    < / xs: SEQUENCE >
    < / xs: complexType >
    < / xs: element >
    < / xs: SEQUENCE >
    < / xs: complexType >
    < / xs: element >
    < / xs: SEQUENCE >
    < / xs: complexType >
    < / xs: element >
    < / xs: Schema >


    File:

    <ImportTemplate>
      <DefaultDirectory>aq_import</DefaultDirectory>
      <TableName>catalog_export</TableName>
      <RecordDelimiter>0x'0A'</RecordDelimiter>
      <FieldDelimiter>0x'09'</FieldDelimiter>
      <ColumnMapping>
        <Column><ColumnName>model_number</ColumnName><Datatype>VARCHAR2</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>user_stock_model_number</ColumnName><Datatype>VARCHAR2</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>vendor_number</ColumnName><Datatype>VARCHAR2</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>specification</ColumnName><Datatype>VARCHAR2</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>vendor_pack</ColumnName><Datatype>VARCHAR2</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>selling_unit</ColumnName><Datatype>VARCHAR2</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>weight</ColumnName><Datatype>NUMBER</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>width</ColumnName><Datatype>NUMBER</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>depth</ColumnName><Datatype>NUMBER</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>deal_net</ColumnName><Datatype>VARCHAR2</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>picture_name</ColumnName><Datatype>VARCHAR2</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>blank_column</ColumnName><Datatype>VARCHAR2</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>vendor_to_stock</ColumnName><Datatype>VARCHAR2</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>priced_by</ColumnName><Datatype>VARCHAR2</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>category</ColumnName><Datatype>VARCHAR2</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>vendor_nickname</ColumnName><Datatype>VARCHAR2</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>user_vendor_name</ColumnName><Datatype>VARCHAR2</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
        <Column><ColumnName>configurable</ColumnName><Datatype>VARCHAR2</Datatype><DestinationTable></DestinationTable><DestinationColumn></DestinationColumn></Column>
      </ColumnMapping>
    </ImportTemplate>
  • Performance of set-based processing with several target tables

    Hello.

    I have a question about set-based processing when a mapping has several tables in the target. I noticed that OWB generates SQL code that usually builds a separate insert query for each target table. Assume that each table receives results from different stages of the processing, so a multi-table insert cannot be used. Looking at the generated PL/SQL code, I get the impression that each insert query is handled independently, and so each one performs the scans of the source tables and the joins on its own.
    To make my question more concrete, I will give two simple examples of ETL flows:
    1) start --> (table scan) --> (joins) --> (insert into table t1)
    2) start --> (table scan) --> (joins) --> two targets: (insert into table t1)
                                          --> two targets: (deduplicator) --> (insert into table t2)
    Assume that the scans and joins are very expensive compared to inserting the rows. So is it usually the case that Oracle performs the table scans and joins twice in example 2), and that example 2) takes twice as long as example 1)?
    Or is Oracle so smart that it can cache the intermediate result of the first query and reuse it in the second query?

    Best regards
    Pawel

    Hi Pawel,

    So is it usually the case that Oracle performs the table scans and joins twice in example 2), and that example 2) takes twice as long as example 1)?

    Yes, you are right

    Or is Oracle so smart that it can cache the intermediate result of the first query and reuse it in the second query?

    Neither the Oracle database nor OWB has intermediate query result caching capabilities.
    The Oracle database does have the "query result cache" feature, but the SQL text must match exactly and it stores only the final query result...

    Kind regards
    Oleg
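    For reference, the database feature Oleg mentions looks like this; it caches only the final result of an exact query text, not intermediate joins (table name is illustrative):

    select /*+ result_cache */ dept_id, count(*)
    from   some_big_table
    group  by dept_id;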

  • Adding several tables on a Page

    Hello

    I am building an application (APEX 4.0) that handles hardware inventory. I have an entry page that adds data to two or more tables at once. The page has two forms on it pointing to two separate tables. However, when I try to run the page, it fails and returns an error:

    ORA-06550: line 1, column 437: PL/SQL: ORA-00904: "ORH_LAST_UPDATE_DATE": invalid identifier ORA-06550: line 1, column 7: PL/SQL: statement ignored
    Error failed to process row in the IDD_ID_DATA table.

    As far as I can see in the application, this column is not the problem (I don't even do anything with it, and it is nullable). I have looked at the application itself and done some research online, but can't find anything useful...


    So my question is this: is it possible to add several tables to a page? If yes how?

    I'm new to APEX so any help would be greatly appreciated!

    UPDATE:

    I received an email from the APEX support team:

    "The answer is simple: you will need to manually code the query (and DML) processes if you want to maintain multiple tables on one page (there is a limit of one table when you use the built-in processes).
    For this I suggest that you remove the processes generated by the wizards and create PL/SQL processes with insert, update and delete statements as needed. This coding is not difficult, but it takes much longer than when you can use the built-in processes."

    I've been playing with the PL/SQL code and the final result is the following:

    begin
    INSERT INTO table1
    VALUES (
    :P2_Item_Field1,
    :P2_Item_Field2);

    INSERT INTO table2
    VALUES (
    :P2_Item_Field1,
    :P2_Item_Field2);
    end;

    I used this code in a custom PL/SQL process in the Processing section of the page processing, and it seems to work fine now. The only downside to this method is that if the name of a page item is changed, the code will also have to be changed. Other than that I have had no problems.

  • Creating a combined timeline based on several timelines in several tables

    Hello
    I need to extract a timeline for a customer based on valid_from and valid_to dates in several tables.
    For example: I have a table named customers with an id, a valid_from and a valid_to date, and a table named contracts with contract_name, customer_id, valid_from and valid_to:

    CUSTOMERS:
    ID | VALID_FROM | VALID_TO
    1 | 01.03.2010 | 01.01.4000

    CONTRACTS:
    CONTRACT_NAME | CUSTOMER_ID | VALID_FROM | VALID_TO
    ContractA | 1 | 01.03.2010 | 01.10.2010
    ContractB | 1 | 01.10.2010 | 01.01.4000

    The following statement would now give me the correct timeline:
    select cus.id customer, con.contract_name, greatest(cus.valid_from, con.valid_from) valid_from, least(cus.valid_to, con.valid_to) valid_to
    from customers cus
    inner join contracts con on cus.id = con.customer_id;

    CUSTOMER | CONTRACT | VALID_FROM | VALID_TO
    1 | ContractA | 01.03.2010 | 01.10.2010
    1 | ContractB | 01.10.2010 | 01.01.4000

    It works, but I have a problem whenever there is a period of time during which there is no contract for a customer, since I would still like to have these periods in my timeline:

    Suppose the following data and the same select statement:

    CUSTOMERS:
    ID | VALID_FROM | VALID_TO
    1 | 01.03.2010 | 01.01.4000

    CONTRACTS:
    CONTRACT_NAME | CUSTOMER_ID | VALID_FROM | VALID_TO
    ContractA | 1 | 01.05.2010 | 01.10.2010
    ContractB | 1 | 01.12.2010 | 01.03.2011

    What I now get is:
    CUSTOMER | CONTRACT | VALID_FROM | VALID_TO
    1 | ContractA | 01.05.2010 | 01.10.2010
    1 | ContractB | 01.12.2010 | 01.03.2011

    But what I would like to get is the following:
    CUSTOMER | CONTRACT | VALID_FROM | VALID_TO
    1 | null | 01.03.2010 | 01.05.2010
    1 | ContractA | 01.05.2010 | 01.10.2010
    1 | null | 01.10.2010 | 01.12.2010
    1 | ContractB | 01.12.2010 | 01.03.2011
    1 | null | 01.03.2011 | 01.01.4000

    What I want is to generate a result with contract = null for any time where there is no contract, because I actually want to join the timelines of several different tables into one, and it would be very complicated to make assumptions based on which data can or cannot be found in a specific table.

    Thanks for any help or ideas,
    Kind regards
    Thomas

    Hi, Thomas,

    Whenever you have a problem, post the sample data in a form that people can use to recreate the problem and test their ideas. CREATE TABLE and INSERT statements are great.
    For example:

    CREATE TABLE     customers
    (     customer_id     NUMBER (6)     PRIMARY KEY
    ,     valid_from     DATE          NOT NULL
    ,     valid_to     DATE          NOT NULL
    );
    
    INSERT INTO customers (customer_id, valid_from, valid_to) VALUES (1, DATE '2010-03-01', DATE '4000-01-01');
    INSERT INTO customers (customer_id, valid_from, valid_to) VALUES (2, DATE '2010-03-01', DATE '4000-01-01');
    
    CREATE TABLE     contracts
    (     contract_name     VARCHAR2 (15)     NOT NULL
    ,     customer_id     NUMBER (6)
    ,     valid_from     DATE          NOT NULL
    ,     valid_to     DATE          NOT NULL
    );
    
    INSERT INTO contracts (contract_name, customer_id, valid_from,        valid_to)
         VALUES            ('Contract 1a', 1,           DATE '2010-03-01', DATE '2010-10-01');
    INSERT INTO contracts (contract_name, customer_id, valid_from,        valid_to)
         VALUES            ('Contract 1b', 1,           DATE '2010-10-01', DATE '4000-01-01');
    
    INSERT INTO contracts (contract_name, customer_id, valid_from,        valid_to)
         VALUES            ('Contract 2a', 2,           DATE '2010-05-01', DATE '2010-10-01');
    INSERT INTO contracts (contract_name, customer_id, valid_from,        valid_to)
         VALUES            ('Contract 2b', 2,           DATE '2010-12-01', DATE '2011-03-01');
    

    If a customer has n contracts, then you might need as many as 2n + 1 output rows for that customer:
    (a) 1 row showing the period from the customer's valid_from date until the 1st contract begins
    (b) n rows showing the periods covered by contracts
    (c) n rows showing the period right after each contract, until the next one starts (or the customer's valid_to, if there is no next contract)
    However, you won't want rows for the non-contract periods ((a) or (c)) if their duration is 0 days.

    This sounds like a job for UNION. Do a 3-way UNION in a subquery to generate (a), (b) and (c), and then put a WHERE clause on the combined results to filter out the 0-length rows.
    In the UNION subquery, (a) can be done with a GROUP BY query, and (c) can be done using the LEAD analytic function.

    WITH     union_data     AS
    (
         SELECT       ua.customer_id
         ,       NULL               AS contract_name
         ,       MIN (ua.valid_from)     AS valid_from
         ,       MIN (oa.valid_from)     AS valid_to
         FROM       customers     ua
         JOIN       contracts     oa     ON     ua.customer_id     = oa.customer_id
         GROUP BY  ua.customer_id
         --
        UNION ALL
         --
         SELECT       customer_id
         ,       contract_name
         ,       valid_from
         ,       valid_to
         FROM       contracts
         --
        UNION ALL
         --
         SELECT       uc.customer_id
         ,       NULL               AS contract_name
         ,       oc.valid_to          AS valid_from
         ,       LEAD ( oc.valid_from
                     , 1
                     , uc.valid_to
                     ) OVER ( PARTITION BY  uc.customer_id
                        ORDER BY      oc.valid_from
                         )          AS valid_to
         FROM       customers     uc
         JOIN       contracts     oc     ON     uc.customer_id     = oc.customer_id
    )
    SELECT       *
    FROM       union_data
    WHERE       contract_name     IS NOT NULL
    OR       valid_from     < valid_to
    ORDER BY  customer_id
    ,       valid_from
    ;
    

    Output:

    CUSTOMER_ID CONTRACT_NAME   VALID_FROM VALID_TO
    ----------- --------------- ---------- ----------
              1 Contract 1a     01.03.2010 01.10.2010
              1 Contract 1b     01.10.2010 01.01.4000
    
              2                 01.03.2010 01.05.2010
              2 Contract 2a     01.05.2010 01.10.2010
              2                 01.10.2010 01.12.2010
              2 Contract 2b     01.12.2010 01.03.2011
              2                 01.03.2011 01.01.4000
    

    Published by: Frank Kulash, April 20, 2011 06:51
    Examples added

  • Question about parallel hint and 'alter table enable parallel DML'

    Hi all

    I have a DML as follows:

    Insert /*+ append */ into table1
    select *
    from COMPLEX_VIEW;

    Here complex_view contains very complicated SQL, with some heavy table joins, subqueries and aggregations.

    Question 1:

    Let's assume that the underlying tables have no "parallel" attribute. Where should I add the parallel hint to force the statement to run in parallel and get better performance?

    Some members think the following is good:

    Insert /*+ append */ into table1
    select /*+ parallel(a, 4) */ *
    from COMPLEX_VIEW;

    But I think the hints should be put in the definition of the complex view, where they belong, and not in the main insert DML, like this:

    Insert /*+ append */ into table1
    select *
    from COMPLEX_VIEW; -- I added the hints inside COMPLEX_VIEW.

    What is your opinion?

    Question 2:
    Without "alter session enable parallel dml", I can still see the parallel sessions in v$px_session, and the execution time was shortened. This seems to prove that even without this statement the DML runs in parallel.

    So, what is the effect of this statement?

    Best regards
    Leon

    I prefer the hint outside of the COMPLEX_VIEW. This way, only this one query forces the hint. If you put the hint inside COMPLEX_VIEW, any other query on COMPLEX_VIEW (or any join of COMPLEX_VIEW to another view or table) would also "hard-code" the hint into its execution. You would then not be isolating parallel query to only where it is needed.

    If you put the parallel hint in the SELECT (or the view), the query is parallelized. This does not necessarily mean that the INSERT is parallelized. What you see in v$px_session are only the PQ slaves for the SELECT.
    You must ALTER SESSION ENABLE PARALLEL DML and add the PARALLEL hint to the INSERT.

    Hemant K Collette
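    Putting those two points together, a minimal sketch (table1 and COMPLEX_VIEW as in the question):

    alter session enable parallel dml;

    insert /*+ append parallel(table1, 4) */ into table1
    select /*+ parallel(v, 4) */ *
    from   COMPLEX_VIEW v;

    commit;

    alter session disable parallel dml;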
