Logging what data was inserted from an external table

Hello

We load large amounts of data using external tables and insert it into tables partitioned by day or by hour.

Other parts of the system want to know which partitions have been updated and when, so before loading all the data we run something like
insert into new_data(date_of_event) select distinct trunc(date_of_event) from external_table;
That is for tables partitioned by day, i.e. it gets all the distinct days present in that specific file.

These files, however, can be very large, and after this first pass we do the full insert into the actual data table, i.e. we read the entire file again.

So my question is: is there another (better) way to retrieve the distinct days/hours? Some way to do the actual insert into the data table and then find out which partitions Oracle inserted data into, from a dictionary view or similar?

I haven't tried creating a proper temporary table, inserting the data into it, extracting the dates and then inserting into the actual data table. But then, which is better: creating a table and dropping it, or reading the file twice?

Thanks in advance

You don't say how you are querying and inserting, but if you have control over that, a few possibilities come to mind.

1. Use INSERT ALL to do a multi-table insert. Insert the date_of_event into a new "journal" table and insert the data into the table you insert into now.
Then you can run the "select distinct trunc(date_of_event)" against the new journal table. Just truncate the journal table before each load, or add a column that records which file was imported.

2. Create a VIEW that includes all the columns of your current query plus another copy of date_of_event, and peel the date of the event off into another table.

Option #1 is the simplest.

See INSERT ALL in the SQL Reference: http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9014.htm
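
A minimal sketch of option #1, with assumed names (data_table as the real target, load_journal as the new journal table); adjust the column lists to your schema:

INSERT ALL
  INTO data_table   (id, payload, date_of_event) VALUES (id, payload, date_of_event)
  INTO load_journal (day_loaded)                 VALUES (TRUNC(date_of_event))
SELECT id, payload, date_of_event
FROM   external_table;

-- afterwards, the distinct days (= partitions) touched by this load:
SELECT DISTINCT day_loaded FROM load_journal;

Note that load_journal gets one row per source row, so either truncate it before each load or deduplicate with DISTINCT as above.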

Tags: Database

Similar Questions

  • Rows from the external table do not show all the data

    Hello

    I have an 80-character line that I read through an external table.

    80% of the lines import fine, but some lines are cut off.

    The bytes in the file are as follows.
    ABCABC2334 0000001000010000001000000001 000000 00001C A002

    The bytes as seen through the external table:
    ABCABC2334 0000001000010000001000000001 000000 A002

    The bytes in the external table row stop somewhere at the end of 000000 and the 00001C is cut off.

    What could be the cause of this?

    I am able to read the characters at the beginning and towards the end of the 80-character record.

    The external table does the following to the record:
    ABCABC2334 0000001000010000001000000001 000000 01B A002

    I can even make an external table definition of (c1 char(1), c2 char(1), ... c80 char(1)) and all the characters appear fine in their respective columns.

    Here is the latest definition of the external table. The "middle" column still shows this behavior. Basically, the data is in the file and can be seen when every character of the line is defined individually, but not as a group of characters.

    DB CHARACTERSET WE8ISO8859P1

    CREATE TABLE EXT_PROJ_1
    (
      field1 CHAR(6 BYTE),
      field2 CHAR(4 BYTE),
      middle CHAR(67 BYTE),
      field3 CHAR(3 BYTE),
      cr     CHAR(2 BYTE)
    )
    ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
      DEFAULT DIRECTORY EXP_DIR
      ACCESS PARAMETERS
      (
        RECORDS DELIMITED BY '\r\n'
        FIELDS ( field1 POSITION(1:6),
                 field2 POSITION(7:10),
                 middle POSITION(11:77),
                 field3 POSITION(78:80),
                 cr     POSITION(81:82)
               )
      )
      LOCATION (EXP_DIR:'ext_proj_1.txt')
    )
    REJECT LIMIT 1
    NOPARALLEL
    NOMONITORING;

    Edited by: 917320 on March 13, 2012 09:07

    Looking at your table definition:

    field1 char(6 BYTE),
    field2 char(4 BYTE),
    middle char(67 BYTE),
    field3 char(3 byte),
    cr char(2 byte)
    

    In which column are you going to store a string of 80 bytes?

    BTW: You said "import into an external table." Do you mean you import FROM an external table, or EXPORT to an external table?

  • Are rows from the external table cached?

    Hi all

    Can anyone confirm whether the rows of an external table are cached in memory? If so, how is the next Oracle query "redirected" to return the external data when the underlying file is updated or modified? Thanks for any responses,

    Kind regards

    Kevin.

    KevinFitz wrote:
    Hi all

    Can anyone confirm whether the rows of an external table are cached in memory? If so, how is the next Oracle query "redirected" to return the external data when the underlying file is updated or modified? Thanks for any responses,

    Kind regards

    Kevin.

    The hard drive itself will cache data at the OS level.
    Oracle does not cache data from external tables, because it knows it is an external data source that can change without its knowledge.

  • How to skip a row when inserting into the staging table

    Hello world

    I'm actually transforming data from a source table into a staging table, and from staging into the final table. I generate a primary key using a sequence. I set the insert method of the staging table to truncate/insert, so whenever the mapping is loaded the staging table is truncated and new data is inserted. But because I use a sequence on the staging table, old data from the source table gets new key values and ends up duplicated in the target table. So I use a key lookup on some of the input attributes, plus an expression, to try to avoid the duplication. On each output attribute of the expression I am trying the CASE statement

    "BOLD" CASE WHEN INGRP1. ROW_ID IS NULL
    THEN
    INGRP1.ID
    END * bold *.

    Because of this condition, I get the error message:

    Warning
    ORA-01400: cannot insert NULL into ("SCOTT"."STG_TARGET_TABLE"."ROW_ID")

    But I'm stuck: when the ROW_ID value is null, what condition or statement should I write to skip the insertion of that row? I want to insert data only when ROW_ID IS NULL.




    Kindly help me.

    Thank you

    Concerning
    Suhail Dayer

    You do not need identical tables to use MINUS; only the select lists must match. Assuming you have the business key (one or more columns that uniquely identify a row of your source data) in both the source and the final table, you can do the following:

    - Use a Set operation whose result is the business keys of the staging table MINUS the business keys of the final table
    - The output of the set operation is then joined back to the staging table to get the rest of the attributes for those rows
    - The output of the join is inserted into the final table

    This will ensure that only rows with new business keys are loaded; see the sketch below.
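
    A minimal sketch of that flow in plain SQL (the poster is using a mapping tool, so this is just the equivalent logic), assuming hypothetical tables STG_EMP (staging) and FINAL_EMP (final) with business key EMP_CODE:

    INSERT INTO final_emp (emp_code, emp_name, salary)
    SELECT s.emp_code, s.emp_name, s.salary
    FROM   stg_emp s
    JOIN   (SELECT emp_code FROM stg_emp
            MINUS
            SELECT emp_code FROM final_emp) new_keys
      ON   s.emp_code = new_keys.emp_code;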

    Hope this helps,
    Roald

  • Read data from the E$ table and insert into the staging table

    Hi all

    I'm new to ODI. I need your help to understand how to read data from an "E$" table and insert it into an intermediate table.

    Scenario:

    Two columns from a flat file, employee name and employee id, must be loaded into a datastore EMP. A check constraint is added so that only rows whose employee names are in capital letters are loaded into the datastore. The control is set to static check. Right-click the datastore, select Control, then Check. The rows that violated the check constraint are kept in the E$_EMP table.

    Problem:

    The problem is that I want to read the data in the E$_EMP table, transform the employee names to capital letters, and move the corrected data from E$_EMP to EMP. Please advise me on how to automatically manage these 'soft' exceptions in ODI.

    Thank you

    If I understand correctly, you want to fix the columns in the E$ tables and then load them into the target.

    Now, if you look at how ODI recycles errors, there is an incremental update to the target using the E$ table after it has populated the I$ table.

    I think you can do the same thing by creating an interface that uses the E$ table as source and implements the business logic in that interface to populate the target.
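
    Outside ODI, the SQL such an interface boils down to is something like this sketch (EMP_ID and EMP_NAME are assumed column names; a real E$ table also carries ODI audit columns you would ignore):

    INSERT INTO emp (emp_id, emp_name)
    SELECT e.emp_id,
           UPPER(e.emp_name)  -- correct the 'soft' error: force the name to capital letters
    FROM   e$_emp e;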

  • Data does not get inserted into the Hxt_Add_Assign_Info_F table

    Hello

    Can anyone tell me when data is inserted into the HXT_ADD_ASSIGN_INFO_F table?
    Lately this table is not getting updated.

    Thank you
    Bitr

    This table is populated when you enter assignment time information (responsibility: OTL Application Developer > OTL Time Accounting > Assignment Time Information).

  • Two database blocks inserting into the same base table?

    Hello
    I have the canvas C_NEW. On this canvas, items from 3 blocks are usable by the user.

    1. Block NEW_HEAD: 3 items, say X1, X2, X3
    2. Block NEW: multi-record, say text fields Y1 through Y10. Also a scroll bar.
    3. Block NEW_ACTION: 6 buttons, say B1 to B6 (one of them is N_OK, which is the item in question)

    The two blocks NEW_HEAD and NEW are database blocks and have the same base table, call it BT. When users click N_OK (after filling in the data in the NEW_HEAD block and then the NEW block, in that order), I need the data from both NEW_HEAD and NEW to go into BT. Currently only the NEW data goes to BT; the fields in BT that correspond to X1, X2, X3 in NEW_HEAD remain null after clicking the N_OK button. I put COMMIT_FORM in the N_OK code, since the blocks are database blocks (as people suggested, it is easier to issue a COMMIT_FORM than to do a lot more work writing my own SQL).

    How can I achieve this?

    Thank you
    Chiru

    So there is no N_LC_DMG_ALLOW column in the table the block is based on. You must set the "Database Item" property to No for all items that are not columns of your database table.

    And when an error occurs, all processing stops, so it is clear why no record is inserted.

  • Select values from the DB1 table and insert into the DB2 table

    Hello

    I have three Oracle databases running on three different machines; their IP addresses are different. Each DB can access the other databases (meaning I am able to select values from and insert values into their tables individually).

    I need to extract data from the DB1 table (say DB1's IP is 10.10.10.10, the user is DB1user and the table is DB1user_table) and insert the values into the DB2 table (say DB2's IP is 11.11.11.11, the user is DB2user and the table is DB2user_table) from DB3, which has access to both DBs.

    How do I do this?

    Edited by: Aemunathan on February 10, 2010 23:12

    Depending on the amount of data that must be moved between DB1 and DB2, and the frequency at which this has to happen, you might consider the SQL*Plus COPY command. I find it very useful for small one-off tasks, as long as I can live within its data type limits. More at http://download.oracle.com/docs/cd/E11882_01/server.112/e10823/apb.htm#i641251.

    Changing some SQL*Plus session parameters is almost mandatory in order to get decent transfer rates. Tuning ARRAYSIZE and COPYCOMMIT can make a huge difference in throughput. Changing LONG may be necessary too, depending on your data.
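    For example, from the SQL*Plus session doing the copy (the values are illustrative only, not tuned recommendations):

    SET ARRAYSIZE 1000   -- rows fetched per batch
    SET COPYCOMMIT 10    -- commit after every 10 batches
    SET LONG 100000      -- maximum LONG column length copied

    The documentation offers these notes on usage: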

    To enable copying data between Oracle and non-Oracle databases, NUMBER columns are changed to DECIMAL columns in the destination table. Hence, if you are copying between Oracle databases, a NUMBER column with no precision will be changed to a DECIMAL(38) column. When copying between Oracle databases, you should use SQL commands (CREATE TABLE AS and INSERT), or you should ensure that your columns have a specified precision.

    The SQL*Plus SET LONG variable limits the length of LONG columns that you copy. If any LONG columns contain data longer than the value of LONG, COPY truncates the data.

    SQL*Plus performs a commit at the end of each successful COPY. If you set the SQL*Plus SET COPYCOMMIT variable to a positive value n, SQL*Plus performs a commit after copying every n batches of records. The SQL*Plus SET ARRAYSIZE variable determines the size of a batch.

    Some operating environments require that service names be placed in double quotes.

    From a SQL*Plus prompt on DB3, the command to move all the content of my_table on DB1 into the same table on DB2 might look like this:

    COPY FROM user1/pass1@DB1 TO user2/pass2@DB2 -
    INSERT my_table -
    USING SELECT * FROM my_table
    

    Note the SQL*Plus line-continuation character '-'. It is used to escape the newline character in a SQL*Plus command so that you do not have to type it all on one line. I use it all the time with this command, but I can't locate the documentation on it right now. Maybe someone else can put their finger on it.

    There are other ways to accomplish what the COPY command does, and it is not without its quirks and limitations, but I find it has its uses in an Oracle toolbox.

  • BULK INSERT INTO THE PARENT TABLE AND THE CHILD TABLE

    Hi all

    We use bulk inserts to improve performance. How do I use this approach when inserting records into parent and child tables?

    For example, I have a procedure that accepts an array of student objects.

    Each student object is defined as (student name, student age, course_object_array).

    The 3rd element, course_object_array, is defined as (course name, course fee, faculty name).

    A student can be associated with several courses.

    Now I have to insert the data into two tables, namely STUDENTS (studentname, studentage, student_number)
    and COURSES (course_id, coursename, fees, facultyname, student_number).

    I use sequences to populate the student_number and course_id columns.

    If I use bulk insert and insert all the records into the parent table, how do I do the association for each child record? How will I know which child record must be associated with which parent?

    Regards,
    Raj

    raj_fresher wrote:
    Hi, thanks for the reply Solomon.

    I actually know about BULK COLLECT and FORALL...

    Did you read the part about FORALL with RETURNING BULK COLLECT? It will allow you to bulk insert via FORALL while RETURNING the student_numbers assigned to each student into a collection. You then use that collection when populating the COURSES table.

    SY.
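
    A minimal PL/SQL sketch of that FORALL ... RETURNING BULK COLLECT idea, with assumed type, table and sequence names (student_seq is hypothetical):

    DECLARE
      TYPE name_tab IS TABLE OF students.studentname%TYPE;
      TYPE num_tab  IS TABLE OF students.student_number%TYPE;
      l_names   name_tab := name_tab('Alice', 'Bob');
      l_numbers num_tab;
    BEGIN
      -- bulk insert the parents, capturing each generated key
      FORALL i IN 1 .. l_names.COUNT
        INSERT INTO students (student_number, studentname)
        VALUES (student_seq.NEXTVAL, l_names(i))
        RETURNING student_number BULK COLLECT INTO l_numbers;

      -- l_numbers(i) is now the key of student i: use it to set
      -- courses.student_number when bulk inserting that student's courses
    END;
    /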

  • Insert summarized rows of a table back into the same table

    I was wondering if there is a smart way to do this with SQL, but I'm not sure, so I would like to consult you guys.

    I have a table like this
     CREATE TABLE "TEMPLE_FINANCE"."TEST" 
       (     "COLUMN1" VARCHAR2(10 BYTE), 
         "COLUMN2" VARCHAR2(10 BYTE), 
         "COLUMN3" VARCHAR2(10 BYTE), 
         "COLUMN4" VARCHAR2(10 BYTE)
       ) ;
    
    
    Insert into TEST (COLUMN1,COLUMN2,COLUMN3,COLUMN4) values ('1','2','50.00',null);
    Insert into TEST (COLUMN1,COLUMN2,COLUMN3,COLUMN4) values ('1','2','50.00',null);
    I would like to run some kind of update and end up with:
    "COLUMN1"     "COLUMN2"     "COLUMN3"     "COLUMN4"
    "1"                  "2"          "100.00"     
    What statement can I run for this?
    I would like to summarize the rows based on column1 and column2 and get rid of the repetitive rows.
    I tried this insert statement, but it basically just adds an extra record to the table:
    INSERT INTO  TEST (SELECT COLUMN1, COLUMN2, SUM(COLUMN3), COLUMN4 FROM TEST GROUP BY COLUMN1, COLUMN2,COLUMN4);
    Any help would be appreciated.

    Edited by: mlov83 on January 25, 2013 12:45

    Edited by: mlov83 on January 25, 2013 13:03

    Hello

    I can't help but wonder whether you have the best table design for what it is you need to do.

    The best solution would be to create a new table using SUM (TO_NUMBER (column3)) and GROUP BY, then drop the original table and rename the new one to the old name; see the sketch below. While you're at it, change column3 to a NUMBER and add a primary key.
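
    A sketch of that rebuild, using the table from the question (test_new is a hypothetical name):

    CREATE TABLE test_new AS
    SELECT   column1, column2,
             SUM (TO_NUMBER (column3)) AS column3,  -- now a NUMBER
             column4
    FROM     test
    GROUP BY column1, column2, column4;

    DROP TABLE test;
    RENAME test_new TO test;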

    Working with just the existing table, INSERT alone won't work: INSERT always adds new rows. You want something that can INSERT new rows (or UPDATE some existing rows) and DELETE rows at the same time. MERGE can do all of that. Here's one way to use MERGE:

    MERGE INTO     test     dst
    USING   (
              SELECT column1, column2, column4
              ,      SUM (TO_NUMBER (column3))
                                 OVER (PARTITION BY  column1, column2, column4)
                                AS column3_total
              ,      ROWID             AS r_id
              ,      MIN (ROWID) OVER (PARTITION BY  column1, column2, column4)
                                          AS min_r_id
              FROM    test
         )          src
    ON     (src.r_id     = dst.ROWID)
    WHEN MATCHED THEN UPDATE
    SET     dst.column3       = TO_CHAR ( src.column3_total
                                , 'FM9999999.00'
                            )
    DELETE
    WHERE     src.r_id     != src.min_r_id
    ;
    

    Edited by: Frank Kulash on January 25, 2013 16:07

  • Record is not inserted into the table - Forms 10g

    Hi all

    I have a form built in 10g (10.1.2.0.2).

    Basically, the form has 2 blocks.
    Block 1, with a single item, where we enter a value and press Enter (this takes you to block 2 and runs the query).
    Block 2 (tabular) fetches the records. This block now has 3 columns (caseid, userid, date).

    Now when I insert a new record, I just need to enter the caseid; the userid and date must be filled in automatically.
    I can fill the userid and DATE fields.
    And when I enter a value in the caseid item of block 2 and then press Ctrl+S (to insert the record and save the transaction),
    I get the message FRM-40400: Transaction complete: 1 records applied and saved.
    But when I query again for the same record, I do not see it inserted in the table.
    Why is this happening?

    Help please...

    Drop the ON-INSERT trigger (do not just comment out the code or make it NULL) and move your code to PRE-INSERT, then try again.

    -Clément

  • Delete all, then insert into the target table: two steps in one transaction?

    Hello

    We have the following two steps in one ODI package; will they be managed in a single transaction?

    step 1 (procedure): delete everything from the target table.
    step 2 (interface): insert data from the source table into the target table.

    Our problem is that step 2 can take several minutes, during which the target table can be temporarily unusable for end users who try to access it. Am I right?

    Kind regards
    Marijo

    Hello

    It can be managed in a single transaction by choosing an Append IKM and setting its TRUNCATE/DELETE ALL option in the Flow tab of the interface.

    Thanks
    Malezieux

  • Why can't multi-threaded INSERTs on the same table work over a single connection?

    Environment: Win2k3, Oracle 10g, .NET 2.0.

    I tried the two approaches below to insert into a table with 20 columns. The code cannot be run directly; it just describes my idea. Note the [Q n] markers and please answer them.

    1. All OracleCommands share a single connection:

    static void Main()
    {
        OracleConnection cnn = new OracleConnection();  // connection string and Open() omitted
        Thread[] ths = new Thread[32];                  // 4 threads per CPU
        for (int j = 0; j < ths.Length; j++)
        {
            Thread th = ths[j] = new Thread(proc);
            th.Start(cnn);
        }
    }

    static object sync_obj = new object();

    static void proc(object param)
    {
        OracleConnection cnn = param as OracleConnection;
        OracleCommand cmd = cnn.CreateCommand();
        cmd.CommandText = ...;   // insert statement on a specific table, using parameters
        lock (sync_obj)          // [Q1]: why is this lock necessary? Removing it occasionally results in ORA-01036, details at the end
        {
            cmd.ExecuteNonQuery();
        }
    }


    2. One connection per OracleCommand:

    static void Main()
    {
        Thread[] ths = new Thread[32];   // 4 threads per CPU
        for (int j = 0; j < ths.Length; j++)
        {
            Thread th = ths[j] = new Thread(proc);
            th.Start();
        }
    }

    static void proc(object param)
    {
        OracleConnection cnn = new OracleConnection();  // connection string and Open() omitted
        OracleCommand cmd = cnn.CreateCommand();
        cmd.CommandText = ...;   // insert statement on a specific table, using parameters

        // [Q2]: why is no lock needed for a successful run here?
        cmd.ExecuteNonQuery();
    }

    [Q3]: is it true that the INSERT statement does not lock the table data at all?
    [Q4]: as shown in the code, is it the rule that only a single INSERT into the same table can run at a time over a single connection?

    In fact, I want to insert thousands of records into a table, with each thread inserting several hundred.

    I would appreciate a detailed answer, and I would be very happy if you could send it to [email protected], because I check email more frequently than the OTN forum.

    EXCEPTION DETAIL WHEN THE LINE IN [Q1] IS REMOVED:
    Message = "ORA-01036: illegal variable name/number"
    Source = "System.Data.OracleClient"
    ErrorCode = -2146232008
    Code = 1036
    StackTrace:
    at System.Data.OracleClient.OracleConnection.CheckError(OciErrorHandle errorHandle, Int32 rc)
    at System.Data.OracleClient.OracleParameterBinding.Bind(OciStatementHandle statementHandle, NativeBuffer parameterBuffer, OracleConnection connection, Boolean& mustRelease, SafeHandle& handleToBind)
    at System.Data.OracleClient.OracleCommand.Execute(OciStatementHandle statementHandle, CommandBehavior behavior, Boolean needRowid, OciRowidDescriptor& rowidDescriptor, ArrayList& resultParameterOrdinals)
    at System.Data.OracleClient.OracleCommand.ExecuteNonQueryInternal(Boolean needRowid, OciRowidDescriptor& rowidDescriptor)
    at System.Data.OracleClient.OracleCommand.ExecuteNonQuery()
    at ConsoleApplication1.Program.proc(Object param) in D:\testing\ConsoleApplication1\ConsoleApplication1\Program.cs:line 92
    at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
    at System.Threading.ThreadHelper.ThreadStart(Object obj)

    Yes, you do want to use array binding. With CLOB or BLOB, because of how OCI works, a round trip to the database must occur to get a lob index, and a temporary lob must be created for each of them. If your LOB data is under 32k you can bind it as VARCHAR2/RAW and you should see a performance increase, but if you do indeed need to bind it as a LOB you'll want to do it via single-row executes.

    Greg

  • Remote index not used with INSERT into a local table over a dblink

    Hi all

    I don't know if anyone has come across this problem before, but for some reason the remote index remains unused ONLY for the insert operation into the local database. Let me explain with this pseudo-code:

    insert into LOCAL_TABLE
    select /*+ index_combine(alias_remote_tab IDX_LOG_DATE) */
           trunc (log_datetime),
           count (*)
    from   REMOTE_TABLE@DBLINK alias_remote_tab
    where  trunc (log_datetime) = trunc (sysdate-1)
    group  by trunc (log_datetime);

    where:
    REMOTE_TABLE is a table partitioned on log_datetime (monthly)
    IDX_LOG_DATE is a valid function-based bitmap index on trunc (log_datetime)
    local database: 10gR2
    remote database: 11gR1
    OS: Windows (both)

    The funny thing is, when I run just the select query on its own, both locally and remotely, the index is used. I checked by printing the explain plan output for the select query. But as soon as I prefix the query with the insert, all hell breaks loose and the local database plays ignorant about the index. The explain plan output for the insert query has no mention of the index, even when I explicitly place the index hint in the select part of the query.

    Shouldn't this be simple enough for Oracle? Am I missing something here?

    Jonathan Lewis describes the details and the reasons for the behavior you are seeing in the following blog post: http://jonathanlewis.wordpress.com/2008/12/05/distributed-dml/
    Your SELECT on its own is executed remotely (filtering and grouping) and only the (relatively) small result set is sent over the dblink to the local database, while with the INSERT only the filtering happens at the remote site and the (relatively) large data set is sent over the dblink to the local database, where the grouping takes place.
    You can give the approach proposed by michaels2 a try. If with the view approach the grouping and filtering take place in the remote database, you will see improved performance.
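
    A sketch of that view-based approach, assuming you can create objects on the remote database (v_log_counts is a hypothetical name):

    -- on the remote database:
    create or replace view v_log_counts as
    select trunc(log_datetime) as log_day,
           count(*)            as cnt
    from   remote_table
    where  trunc(log_datetime) = trunc(sysdate - 1)
    group  by trunc(log_datetime);

    -- on the local database, only the grouped result crosses the dblink:
    insert into local_table
    select log_day, cnt
    from   v_log_counts@dblink;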

    PS: BTW, if the SQL I suggested in my previous post for checking the plan uses the index, then the cause of your performance issue is certainly not the unused index; it is the amount of data transferred over the dblink.

  • Specify the data type when using CREATE TABLE AS SELECT?

    In the table creation code below I'm trying to create a DATE column called DOB. However, the resulting DOB column in the persons2 table is a VARCHAR2. How can I specify that I want the DOB field to be a DATE? (I know I can create the structure first and then fill it, but that isn't what I want.)

    create table persons2 as
    SELECT Person_ID,
           decode (BIRTHYEAR, null, null,
                   to_date (nvl(BIRTHMONTH,1) ||'/'|| nvl(BIRTHDAY,1) ||'/'|| nvl(BIRTHYEAR,1500), 'MM/DD/YYYY')) DOB
    from   persons

    Thank you

    Use the CAST function in your DECODE:

    SQL> create table Persons2 as
      2  SELECT Person_ID,
      3         decode(BIRTHYEAR
      4               ,null, cast(null as date)
      5               ,to_date(nvl(BIRTHMONTH,1)||'/'||nvl(birthday,1)||'/'||nvl(BIRTHYEAR,1500),'MM/DD/YYYY')) DOB
      6  from   persons
      7  ;
    
    Table created.
    
    SQL> desc persons2
     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------------
     PERSON_ID                                          NUMBER
     DOB                                                DATE
    
