Urgent: DELETE, INSERT, SELECT INTO, UPDATE Performance problem

Hello

NEED HELP TO OPTIMIZE the INSERT (INSERT INTO ... SELECT):
=================================================
We have a report.

According to the current design, the following steps are used to populate the custom table that is used for the final report:

(1) DELETE all records from the custom table XXX_TEMP_REP.

(2) INSERT records into the custom table XXX_TEMP_REP (all records associated with type A)
using an
INSERT INTO ... SELECT ...
statement.

(3) UPDATE records in XXX_TEMP_REP
using custom logic for the records just populated.

(4) INSERT records into the custom table XXX_TEMP_REP (records associated with type B)
using an
INSERT INTO ... SELECT ...
statement (a sketch of this flow is shown below).
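
For reference, a minimal sketch of that flow follows. Only the table name XXX_TEMP_REP comes from this post; the column names, the type A/B source tables and the update rule are placeholders, not the real design.

    -- (1) clear the staging table
    DELETE FROM xxx_temp_rep;

    -- (2) load the type A records (source_a and the column list are placeholders)
    INSERT INTO xxx_temp_rep (rec_id, rec_type, amount)
    SELECT a.rec_id, 'A', a.amount
      FROM source_a a;

    -- (3) apply the custom logic to the rows just loaded (placeholder rule)
    UPDATE xxx_temp_rep t
       SET t.amount = t.amount * 2
     WHERE t.rec_type = 'A';

    -- (4) load the type B records (source_b is a placeholder)
    INSERT INTO xxx_temp_rep (rec_id, rec_type, amount)
    SELECT b.rec_id, 'B', b.amount
      FROM source_b b;

    COMMIT;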

The statistics collected for the INSERT statement are:

Wait event information
---------------------

Wait event for SID 460: db file sequential read
P1 text: file#
P1 value: 20
P2 text: block#
P2 value: 435039
P3 text: blocks
P3 value: 1


Session statistics
-----------------

size: 293.84 M
Parse count (hard): 34
Parse count (total): 1217
user commits: 3

Transaction and Rollback Information
-----------------------------------

Rollback used: 35.1796875 M
Rollback Records: 355886
Rollback Segment number: 12
Rollback Segment name: _SYSSMU12$
Logical IOs: 1627182
Physical IOs: 136409
RBS starting extent ID: 14
Transaction start time: 29/09/10 04:22:11
Transaction_Status: ACTIVE

Please suggest how this can be optimized.

Kind regards
Ngandu

Hello

Does this have anything to do with the Oracle Forms tool?

François

Tags: Oracle Development

Similar Questions

  • Deleting all rows and re-inserting them in Oracle: can it make performance worse?

    I'm working on a project where I need to do a regular batch update (every 4 months) from Excel files. These files don't have a good key in their rows.

    Developing code that deletes all rows and inserts the whole set again is easier than code that looks up every row by its primary key and updates it if necessary (sometimes the key can be 5 columns).

    My question is: if I delete all the rows in the tables and insert them again, will it cause tablespace fragmentation and, in the future, a loss of performance?

    Is there a way to avoid this?

    Thanks in advance

    Alexander

    This response helped me a lot.

    Thank you all

    Deleting all rows and re-inserting them in Oracle: can it make performance worse? - Stack Overflow

  • Insert /*+ append */ into TableName Select /*+ parallel(t, 4, 1) */ * from Tabl

    Hello

    I use
    Insert  /*+ append */   into TableNew select  /*+ parallel( t,4,1) */  * from TableOld t
    to load data into the table TableNew from
    another table with the same structure - TableOld.

    It takes me more than 4-5 hours to do this for about 1.5 million rows, with one column defined as XMLType as well.

    - Are there any rules to determine the degree of parallelism, or is it just trial and error?
    - Or set none and let the optimizer decide?
    - Any other rules I should keep in mind to try to optimize this operation?

    Thank you...

    Edited by: BluShadow on December 6, 2012 11:20
    Added {noformat}{noformat} tags for readability. Read {message:id=9360002} and learn to do this yourself.

    Ben hassen says:

    user8941550 wrote:
    So I tried,

    Insert /*+ append */ into TableNew select * from TableOld t
    and it is now running in an hour.

    Any explanation for this?

    I don't recommend this approach (insert into ... select ...) for large tables,
    because you have no ability to commit while inserting (only at the end of the insert process can you commit).
    It also takes much more undo space (if you run a rollback after the insert statement).

    But committing in blocks or chunks is not good from a transactional point of view and can cause more problems later if there is an error in one of the blocks of inserts, because you will not be able to easily roll back data that has already been committed.
    I highly recommend using a single INSERT ... SELECT ... statement in most cases (there are always going to be rare exceptions), simply to maintain transactional integrity.

    A better way is to define a cursor, fetch the rows in blocks (e.g. 1000 rows), and run a commit after every 1000 rows.

    That's a great way to a) slow down the process and b) make it difficult to recover if a problem occurs.

    Whenever you issue a commit, it tells Oracle to write the data to the data files. That is performed by the writer processes. By default, the database starts with N writer processes ("N" depends on the database configuration, so let's assume 4 writer processes for example). Whenever a commit is issued, a writer process gets allocated to the task of writing the data to the data files as required. The workload is shared between these processes, so if a further commit is issued while one writer process is already busy, another process will take that request. If the database finds that a lot of commits are being issued and all the existing writer processes are busy, it will spawn new writer processes to handle the new requests, taking up more server resources (memory, file and process handles, etc.); and the more writer processes there are, the more chance they have of creating wait states between themselves as they all try to write to the (same) data files, especially if the data being committed is closely related, e.g. the same table and associated tablespace. These resources are not released when the tasks are complete; until the server is restarted or Oracle is shut down, Oracle keeps them around, expecting to get the same kind of workload again.

    Oracle is perfectly capable of processing millions of rows, so suggesting that it is "better" to insert in chunks of 1,000 records at a time, with a commit after each, not only slows down the process from a PL/SQL code perspective, but also slows things down from a server resources point of view, as well as causing potential file I/O contention issues... including the possibility of I/O contention problems on the (same) disks. It is certainly NOT "a better way".

    These suggestions are usually given by people who do not understand the underlying architecture of Oracle databases, something which is taught on the Oracle DBA courses; it's a course I would recommend any so-called professional developer attend, even if they never intend to be a DBA. Knowing how the database architecture works is valuable for writing good code... not only in Oracle, but in other RDBMS products too (when I was an Ingres database developer, I took the Ingres DBA courses and it was likewise valuable for understanding how to write good code).
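
    To make the recommendation concrete, here is a minimal sketch of the single-statement approach (the table names come from the question; whether parallel DML actually helps in this case is something to test, not a given):

        -- direct-path load in one transaction; size undo/redo for it instead of committing in chunks
        ALTER SESSION ENABLE PARALLEL DML;   -- required for the INSERT side to run in parallel

        INSERT /*+ APPEND PARALLEL(TableNew, 4) */ INTO TableNew
        SELECT /*+ PARALLEL(t, 4) */ *
          FROM TableOld t;

        COMMIT;   -- a single commit at the end keeps the load atomic

    Note that after a direct-path (APPEND) insert, the session must commit before it can query TableNew again.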

  • How to record the triggering action (update, delete or insert) in the table?

    Hello experts,
    I have created a trigger on update, delete or insert. It inserts records into a table that logs the inserted, updated or deleted row.
    I would also like to insert which action fired the trigger: update, delete or insert.

    Here's my trigger code.

    create or replace trigger BOM_HISTORY_TRIGGER
    after update or delete or insert on BOM_HISTORY
    for each row
    begin

    insert into bom_history (BOM_MOD_CODE, BOM_ASSY_CODE, BOM_ASSY_QPS_old, BOM_ASSY_QPS, PPC_SRL_#, ITEM_ENTR_BY,
                             ITEM_MOD_BY, MOD_DATE, ACTION)
    values (:old.BOM_DIV_CODE, :old.BOM_MOD_CODE, :old.BOM_ASSY_CODE,
            :old.BOM_ASSY_QPS, :new.BOM_ASSY_QPS, :old.PPC_SRL_#, :old.ITEM_ENTR_BY,
            :old.ITEM_MOD_BY, SYSDATE, 'delete');
    end;

    How can I keep the action in the table?

    Also, there is an error:
    Warning: Trigger created with compilation errors.

    Please help to fix it.

    Thank you, best regards,
    Yoann

    >
    insert into bom_history (BOM_MOD_CODE, BOM_ASSY_CODE, BOM_ASSY_QPS_old, BOM_ASSY_QPS, PPC_SRL_#, ITEM_ENTR_BY,
                             ITEM_MOD_BY, MOD_DATE, ACTION)
    values (:old.BOM_DIV_CODE, :old.BOM_MOD_CODE, :old.BOM_ASSY_CODE,
            :old.BOM_ASSY_QPS, :new.BOM_ASSY_QPS, :old.PPC_SRL_#, :old.ITEM_ENTR_BY,
            :old.ITEM_MOD_BY, SYSDATE, 'delete')
    >
    Unless I've counted wrong, you list 9 columns but try to insert 10 values.
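
    To answer the original question about recording the action: the usual pattern is the INSERTING / UPDATING / DELETING conditional predicates. Below is a minimal sketch; it assumes a separate audit table BOM_HISTORY_LOG (an assumed name, since inserting into the same table the trigger fires on would fire the trigger recursively) and only logs a few of the columns from the post.

        create or replace trigger BOM_HISTORY_TRIGGER
        after insert or update or delete on BOM_HISTORY
        for each row
        declare
          v_action varchar2(10);
        begin
          -- the conditional predicates report which statement fired the trigger
          v_action := case
                        when inserting then 'insert'
                        when updating  then 'update'
                        when deleting  then 'delete'
                      end;

          insert into BOM_HISTORY_LOG (BOM_MOD_CODE, BOM_ASSY_QPS_old, BOM_ASSY_QPS, MOD_DATE, ACTION)
          values (nvl(:old.BOM_MOD_CODE, :new.BOM_MOD_CODE),
                  :old.BOM_ASSY_QPS, :new.BOM_ASSY_QPS,
                  SYSDATE, v_action);
        end;
        /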

  • Performance problems since the update of 2015.6

    Hello together,

    Since I updated Lightroom yesterday to version 2015.6, the performance on my machine has been incredibly bad.

    My system:

    -MacBook Pro (retina, mid-2012)

    -2.3 GHz Intel Core i7

    -16 GB 1600 MHz DDR3

    -NVIDIA GeForce GT 650M 1024Mo

    -OS X El Capitan (10.11.5)

    Lightroom is currently synchronizing 1400 photos, but even if I stop this process, LR uses so much CPU that I can't work in other programs like Photoshop (updated yesterday, too). I noticed this performance problem because PS freezes when I use the Healing Brush tool.


    I don't know what's happening with LR, but it's not ideal for my workflow.

    Does anyone have an idea?

    If I close LR I can work in PS, even though the Healing Brush tool is slower than in the previous version. But I can't keep closing and reopening LR all the time.

    Thank you

    regards Denis

    There are already dozens of threads discussing the same question in this forum, as well as in the official feedback forum here: https://feedback.photoshop.com/photoshop_family/categories/photoshop_family_photoshop_lightroom

    You can roll back to 6.5.1 if you wish. Here's how: How do I roll back to Lightroom 2015.1.1 or Lightroom 6.1.1? (Lightroom Queen)

    Just replace 6.1.1 with 6.5.1

  • Dreamweaver 2015 does not have the ready-made PHP functions (insert, delete and select)

    Hello all. The Dreamweaver configuration is correct, but when I use the ready-made functions (insert, select, delete, login - PHP against the database) it does not work. I have already tried everything, and the only thing that works is to do it manually. What can you do for me?

    See here http://www.dmxzone.com/go/21842/enable-server-behaviors-and-data-bindings-panel-support-for-dreamweaver-cc

  • ESX 3.5 Update 4 performance / guest boot problems

    We recently upgraded to Update 4 and now, a week later, I have noticed some drastic performance problems in our VMs. VMs (Win2k3 64-bit, 2K8 64-bit, XP Pro) that used to take about a minute to boot now take 5 minutes. Operation inside these machines is extremely slow as well; simply logging in takes a few minutes and every action takes a while. What is curious is that there are no visible errors in the logs (VC and Windows) and the performance counters all look good. Also, if I export a virtual machine from my ESX infrastructure to a 'free' ESXi host I use for testing, the virtual machine performs very well.

    We are running ESX 3.5 w/ VirtualCenter (separate physical machine) on 3 quad-core 3 GHz hosts with 24 GB of RAM each, connected to a Fibre Channel SAN.

    Has anyone else experienced degradation after upgrading to Update 4? Help, please.

    I had similar problems in my lab setup. It turned out that the shares that had been granted to VMs and resource pools had been completely stripped. As a temporary workaround, I just removed all the resource pools by disabling DRS on my test cluster.

    Another issue I experienced after the upgrade to U4 is that the virtual machine networks randomly started showing very high latency from time to time (for example: from a physical workstation I ping a virtual machine on the same gigabit switch and latencies range from 10 ms to 2000 ms!). I could not determine the exact circumstances in which this occurs, but it certainly has a devastating effect on the boot time of my machines. I think messing with the service console memory and CPU reservations plus increasing the 'system resources' a little solved the issue, but I'm not 100% sure that was the root cause (this was a common fix for VM network latencies in previous versions of ESX). In the meantime I also upgraded my VMware Tools to the U4 version (which has a new vmxnet driver) and I have not noticed the problem since. Again, I'm not sure that this is what caused and solved the problem, because of its random occurrence.

  • Performance problem when a direct IO option is selected

    Hello Experts,

    One of my clients has a performance problem when the direct I/O option is selected. They report that memory usage increases when direct I/O storage is selected, compared to the buffered I/O option.

    There are two BSO-type applications on the server. When using buffered I/O they experienced a high level of read and write I/O; direct I/O reduced the read and write I/O but greatly increased the memory usage.

    Other information-

    (a) Details of the environment

    HSS - 9.3.1.0.45, AA - 9.3.1.0.0.135, Essbase - 9.3.1.2.00 (64-bit)
    OS: Microsoft Windows x64 (64-bit) 2003 R2

    (b) What is the memory usage when buffered I/O and direct I/O are used? What about when running calculations, database restructures and database queries? Do these processes take a long time to execute?

    Application 1: 700 MB buffered, 5 GB direct
    Application 2: 600 MB to 1.5 GB buffered, 2 GB direct
    Calculation time can increase from 15 minutes to 4 hours. The same goes for restructuring.


    (c) What are the current database data cache, data file cache and index cache values?

    Application 1: buffered (index 80 MB, data 400 MB); direct (index 120 MB, data file 4 GB, data 480 MB).
    Application 2: buffered (index 100 MB, data 300 MB); direct (index 700 MB, data file 1.5 GB, data 300 MB).


    (d) What is the total size of the ess0000x.pag and ess0000x.ind files?

    Application 1: page files 20 GB, index files 1.7 GB.
    Application 2: page files 3 GB, index files 700 MB.

    Any suggestions on how to improve performance when direct I/O is enabled? Any performance guidance for the above scenario would be a great help.

    Thanks in advance.



    Kind regards
    Sudhir

    This post on understanding buffered I/O and direct I/O may be of some use to you.

    Cheers

    John
    http://John-Goodwin.blogspot.com/

  • Compute statistics performance problems

    Hi all

    We run a statistics computation on a table that is completely deleted, refilled and then analyzed every day. However, the statistics computation step takes more and more time each day. Does the table keep some kind of log/history of the statistics that were computed before the data was deleted? Or is there another reason why computing statistics on roughly the same amount of data every day should take longer and longer? We are on 10g.

    A few notes:
    I can't truncate the table because I need to be able to roll back in case of problems. I could drop and recreate it once if that helps.
    Before deleting and refilling, I disable all indexes and then re-create them.
    We transfer about 10 million rows into this table every day.
    A year ago the entire process (fill + analyze) took 3 hours; now it takes 18 hours.
    We also got this error on the table: ORA-01555: snapshot too old: rollback segment number ... with name "..." too small; ORA-02063: preceding line from T_SRC


    This is the script in the procedure:
        execute immediate 'alter index IND_TAB1 unusable';
    
        delete from TAB1;
    
        insert /*+ APPEND */ into TAB1
        select
          COL1
         ,COL2
          from V_SRC1
         where change_date   between sysdate-1 and sysdate
            or creation_date between sysdate-1 and sysdate;
    
        execute immediate 'alter index IND_TAB1  rebuild';
    
        execute immediate 'analyze table TAB1 compute statistics';
    V_SRC1 is based on the T_SRC table on the remote system.

    Thank you for the ideas.

    Hello

    When you insert and delete a large amount of data in a table, it becomes fragmented and can accumulate a significant amount of unused disk space. When you perform a full table scan, Oracle scans the segment up to the high water mark (HWM), which points past the last block ever used, i.e. it scans all the empty blocks as well. To reset the HWM you can issue ALTER TABLE ... MOVE. Despite what the command name suggests, your table will not be moved anywhere; it will simply be rebuilt, reclaiming the unused free space. Any indexes defined on the table will have to be rebuilt after that as well (ALTER INDEX ... REBUILD).
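
    A minimal sketch of that reset, using the table and index names from your script (note that the MOVE leaves the index unusable until it is rebuilt, so do this in a maintenance window):

        -- rebuild the table segment and reset the high water mark
        ALTER TABLE TAB1 MOVE;

        -- indexes are left UNUSABLE by the MOVE, so rebuild them afterwards
        ALTER INDEX IND_TAB1 REBUILD;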

    Alternatively, you may consider partitioning by date range (if your license allows it). With partitioning everything would become much simpler:

    (1) you load your data into the table and check everything;
    (2) if it is OK, you just drop the previous partition; otherwise, roll back (see the sketch below).
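
    A rough sketch of the partitioned variant, with an assumed daily range-partitioning key (the LOAD_DATE column, the partition names and the dates are illustrative only):

        -- illustrative only: a range-partitioned version of the table
        CREATE TABLE TAB1_PART (
          COL1      NUMBER,
          COL2      NUMBER,
          LOAD_DATE DATE
        )
        PARTITION BY RANGE (LOAD_DATE) (
          PARTITION p_20110101 VALUES LESS THAN (DATE '2011-01-02'),
          PARTITION p_20110102 VALUES LESS THAN (DATE '2011-01-03')
        );

        -- once the new day's load has been verified, drop the previous day's partition
        ALTER TABLE TAB1_PART DROP PARTITION p_20110101;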

    Regarding COMPUTE - it goes through all the data to calculate the statistics, instead of using a small sample (generally a few percent). It is slower than ESTIMATE and in most cases does not provide any advantage over it.

    ANALYZE vs DBMS_STATS - using ANALYZE to gather optimizer statistics is now obsolete and not recommended. DBMS_STATS offers the same features and much more; given that your version is 10g, you should use DBMS_STATS and not ANALYZE.
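
    For example, the ANALYZE call in your procedure could be replaced with something along these lines (the owner is assumed to be the current schema; AUTO_SAMPLE_SIZE lets Oracle choose the sample):

        -- instead of: execute immediate 'analyze table TAB1 compute statistics';
        DBMS_STATS.GATHER_TABLE_STATS(
          ownname          => USER,                          -- assumes TAB1 is in the current schema
          tabname          => 'TAB1',
          estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,   -- sampled, not a full compute
          cascade          => TRUE                           -- gather index statistics too
        );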

    Best regards
    Nikolai

  • Update assignment problem

    Hi all
    I updated an assignment in Update mode instead of Correction mode and changed the cost center and the position; now, because of this, payroll is calculating the salary and allowances twice. Is it possible to fix this problem and reverse the update into a correction?

    Kind regards

    Why don't you delete your updated record and then re-enter the correct data in correction mode?
    Assignment > set your effective date to the date on which you inserted your updated record - 1
    Delete the record and choose the appropriate delete option
    Save the changes and re-query your data, then perform the required correction.

  • Select for update doesn't work for me

    I have a table that stores a set of numbers. These numbers, once used, should not be used again within 365 days. In addition, different processes requesting numbers should be given different numbers. Basically, a number must be used only once.

    Here's the code I used to allocate 100 numbers. It is supposed to select a row and try to lock it using FOR UPDATE. If it can select the row, I assume it is available and update LASTMODTM. If it cannot select it, that means another process has locked it, and I try the next number.

    I ran 2 processes executing the code below at the same time, expecting each process to get its own unique set of numbers, but I ended up with exactly the same list of numbers in both processes. I believe SELECT FOR UPDATE is not working, which is why the two processes are able to pick the same rows. Any advice for me?

    DECLARE
      v_nbr     NUMBER_POOL.NBR%TYPE := NULL;
      cnt       INTEGER := 0;
      v_nbrlist VARCHAR2(32676) := '';
    BEGIN
      FOR x IN (
        SELECT ROWID RID
          FROM NUMBER_POOL
         WHERE SYSDATE - LASTMODTM > 365
         ORDER BY LASTMODTM, NBR
      )
      LOOP
        BEGIN
          -- Lock the row so that it is not handed to the other process
          SELECT MAWB
            INTO v_nbr
            FROM NUMBER_POOL
           WHERE ROWID = x.RID
             FOR UPDATE NOWAIT;
        EXCEPTION
          -- Could not lock the row, which means this number is locked by another
          -- process at the same time. Try the next number.
          WHEN OTHERS THEN
            CONTINUE;
        END;

        UPDATE NUMBER_POOL
           SET LASTMODTM = SYSTIMESTAMP
         WHERE ROWID = x.RID;

        cnt       := cnt + 1;
        v_nbrlist := v_nbrlist || ',' || v_nbr;

        IF cnt = 100 THEN
          DBMS_OUTPUT.PUT_LINE(SUBSTR(v_nbrlist, 2));
          EXIT;
        END IF;
      END LOOP;
    END;

    Thanks all for your advice. I solved my problem.

    Sorry - but that does NOT solve your problem. It may 'seem' to for the simple tests you ran, but it does not take into account how locking and read consistency actually work in Oracle.

    It turns out that the 2 processes did not conflict with each other at all, because the SELECT in the loop took 0.2 sec and 100 loop iterations completed almost instantly. The 2 processes ran their SELECTs at almost the same time; the process that finished its SELECT first received its 100 numbers in a very short period of time, while the 2nd process was still selecting. By the time the 2nd process got to its SELECT FOR UPDATE, the first process had already finished and committed its change. That is why the 2nd process never hit the lock exception.

    Yes - but it's NOT the root cause of your problem. The problem is that you are using TWO separate queries. The first query determines which rows to select, but it IS NOT locking those rows. This means that any other user can also select, or even lock, the rows that were only selected by the first user in the first query.

    To fix this, I simply added "AND SYSDATE - LASTMODTM > 365" so that numbers already used will not be selected. The select for update query will raise an exception and the loop then continues to try the next number.

    No - that will NOT fix the problem if you still use a separate SELECT statement for the first query.

    A query, even a SELECT query, establishes the SCN (the point in time) for the data. That second 'select for update' query is still working with rows that were only selected, but not locked, by the first query. This means that another session/user may have selected some of those same rows in their first query, and then run their second query before, or even during, your attempt to execute your second query.

    As Tom Kyte shows in the link I gave:

    In addition, there are some "read consistency problems" here.

    The select min(job_id) where status = 0 runs... It returns the number 100 (for example).

    We update this row - but have not committed yet.

    Someone else comes along, runs the select min(job_id)... and gets, you guessed it, 100. But they block on the update.

    Now you commit, and you have 100. They are now unblocked and - well - update the SAME row...

    In other words: your logic does not work. The same record will sometimes be processed by N processes.

    The solution is to LOCK the rows with the first query so that no other session can perform DML on those rows. The SKIP LOCKED clause in 11g is what you should use.

    See Tom Kyte's example in his first reply in this thread:

    https://asktom.Oracle.com/pls/asktom/f?p=100:11:0:P11_QUESTION_ID:2060739900346201280

    Can you show how I can avoid using AQ and use SKIP LOCKED to process elements concurrently in a simple way? ...

    Suppose you want to get the first row of a table that is not currently locked and matches some non-unique criteria (unique would not make sense - there would be only one row, and it would be trivial to determine whether it is locked or not; without 'skip locked' you would just use nowait).

    So the unique-key case is not remotely interesting - skip locked is never needed there, just a nowait.

    You just query and fetch, for example:

    ops$tkyte%ORA11GR2> select empno
      2    from scott.emp
      3   where job = 'CLERK'
      4  /

         EMPNO
    ----------
          7369
          7876
          7900
          7934

    ops$tkyte%ORA11GR2> declare
      2      l_rec scott.emp%rowtype;
      3      cursor c is select * from scott.emp where job = 'CLERK' for update skip locked;
      4  begin
      5      open c;
      6      fetch c into l_rec;
      7      close c;
      8      dbms_output.put_line( 'I got empno = ' || l_rec.empno );
      9  end;
     10  /
    I got empno = 7369

    See the 'FOR UPDATE SKIP LOCKED' in the cursor definition? That LOCKS the rows that are needed AND makes the code skip any rows that are already locked, instead of throwing an exception as NOWAIT would.

    It lets you get whatever unlocked rows are available that match your query, and it prevents contention with other users.

    NOTE: there is only ONE query.

    Summary - it is your FIRST SELECT query that does NOT lock the rows it selects. That IS THE CAUSE of your problem. Your 'two query' solution will NOT work. Use the solution presented by Tom.
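
    Applied to the NUMBER_POOL table from the question, a single-query version could look roughly like this sketch (the column names, the 100-number batch and the 365-day rule are carried over from the posted code; it assumes 11g for SKIP LOCKED):

        DECLARE
          CURSOR c IS
            SELECT NBR
              FROM NUMBER_POOL
             WHERE SYSDATE - LASTMODTM > 365
             ORDER BY LASTMODTM, NBR
               FOR UPDATE SKIP LOCKED;   -- one query: selects AND locks, skipping locked rows
          v_nbr     NUMBER_POOL.NBR%TYPE;
          cnt       INTEGER := 0;
          v_nbrlist VARCHAR2(32676);
        BEGIN
          OPEN c;
          WHILE cnt < 100 LOOP
            FETCH c INTO v_nbr;
            EXIT WHEN c%NOTFOUND;

            UPDATE NUMBER_POOL
               SET LASTMODTM = SYSTIMESTAMP
             WHERE CURRENT OF c;          -- update the row we have just locked

            cnt       := cnt + 1;
            v_nbrlist := v_nbrlist || ',' || v_nbr;
          END LOOP;
          CLOSE c;

          DBMS_OUTPUT.PUT_LINE(SUBSTR(v_nbrlist, 2));
          COMMIT;                         -- releases the locks once the batch is allocated
        END;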

  • Invalid table name error when inserting values into a table

    I use the following statement to insert values into a table:

    curs.execute("INSERT INTO '%s' VALUES ((SELECT MAX(REC_ID) + 1 FROM GSAP_MSG_IN), (SELECT MAX(gsap_msg_id) + 1 FROM GSAP_MSG_IN), 'SHELLSAP', sysdate, '%s', EMPTY_BLOB(), 1, SYSDATE, EMPTY_BLOB(), SYSDATE)" % (table_name, file_extension))

    where table_name comes from the following statement:

    table_name = config.staging_db_tablesNames['in_msgs']

    since I created a configuration file for all parameters that can change. The table name, when checked with a print statement, is correct, but when it is put into the query above to run the insert statement, it gives an error. The following is the full execution output, where you can see the table name:

    $ python gsapscnr.py
    Polling for data files in /home/mh/inbox/...

    GSAP_MSG_IN
    Traceback (most recent call last):
    File "gsapscnr.py", line 147, in ?
    poll_for_data()
    File "gsapscnr.py", line 86, in poll_for_data
    Sorter = load_details_first)
    File "gsapscnr.py", line 42, survey
    curs.execute("INSERT INTO '%s' VALUES ((SELECT MAX(REC_ID) + 1 FROM GSAP_MSG_IN), (SELECT MAX(gsap_msg_id) + 1 FROM GSAP_MSG_IN), 'SHELLSAP', sysdate, '%s', EMPTY_BLOB(), 1, SYSDATE, EMPTY_BLOB(), SYSDATE)" % (table_name, file_extension))
    cx_Oracle.DatabaseError: ORA-00903: invalid table name

    Can anyone help with this problem please? Am I passing the table name in the wrong way? Also, can anyone suggest a good tutorial for Python programming using cx_Oracle?

    Regards

    Print the SQL string that you build, cut and paste that output into SQL*Plus, and see if it runs. This may show you that you should remove the single quotes around the table name %s in the Python file.

  • Doubt about inserting data into a table

    Hi all, when I try to insert data into a table through an anonymous block, the PL/SQL block runs successfully, but the data does not get inserted. Can someone please tell me what I am doing wrong?
    SQL> DECLARE
      2
      3  V_A NUMBER;
      4
      5  V_B NUMBER;
      6
      7  v_message varchar2(25);
      8
      9
     10  BEGIN
     11
     12
     13  select regal.regal_inv_landed_cost_seq.NEXTVAL into V_A from dual ;
     14
     15  select regal.regal_inv_landed_cost_seq.currval into V_B from dual ;
     16
     17  INSERT INTO rcv_transactions_interface
     18  (
     19               INTERFACE_TRANSACTION_ID,
     20               HEADER_INTERFACE_ID,
     21               GROUP_ID,
     22               TRANSACTION_TYPE,
     23               TRANSACTION_DATE,
     24               PROCESSING_STATUS_CODE,
     25               PROCESSING_MODE_CODE,
     26               TRANSACTION_STATUS_CODE,
     27               QUANTITY,
     28               LAST_UPDATE_DATE,
     29               LAST_UPDATED_BY,
     30               CREATION_DATE,
     31               CREATED_BY,
     32               RECEIPT_SOURCE_CODE,
     33               DESTINATION_TYPE_CODE,
     34               AUTO_TRANSACT_CODE,
     35               SOURCE_DOCUMENT_CODE,
     36               UNIT_OF_MEASURE,
     37               ITEM_ID,
     38               UOM_CODE,
     39               EMPLOYEE_ID,
     40               SHIPMENT_HEADER_ID,
     41               SHIPMENT_LINE_ID,
     42               TO_ORGANIZATION_ID,
     43               SUBINVENTORY,
     44               FROM_ORGANIZATION_ID,
     45               FROM_SUBINVENTORY
     46  )
     47
     48  SELECT
     49       regal.regal_inv_landed_cost_seq.nextval,      --Interface_transaction_id
     50       V_A,                                          --Header Interface ID
     51       V_B,                                          --Group ID
     52       'Ship',                                       --Transaction Type
     53       sysdate,                                      --Transaction Date
     54       'PENDING',                                    --Processing Status Code
     55       'BATCH',                                      --Processing Mode Code
     56       'PENDING',                                    --Transaction Status Code
     57       lc.quantity_received,                          --Quantity
     58       lc.last_update_date,                          --last update date
     59       lc.last_updated_by,                           --last updated by
     60       sysdate,                                      --creation date
     61       lc.created_by,                                --created by
     62       'INVENTORY',                                  --Receipt source Code
     63       'INVENTORY',                                  --Destination Type Code
     64       'DELIVER' ,                                    --AUT Transact Code
     65       'INVENTORY',                                  --Source Document Code
     66        msi.primary_uom_code ,                       --Unit Of Measure
     67        msi.inventory_item_id,                        --Item ID
     68        msi.primary_unit_of_measure,                  --UOM COde
     69        fnd.user_id,
     70        V_A,                                         --Shipment Header ID
     71        V_B,                                         --SHipment Line ID
     72        82,                                           --To Organization ID
     73        'Brooklyn',                                     --Sub Inventory ID
     74        81,                                            --From Organization
     75        'Vessel'                                       --From Subinventory
     76
     77    FROM
     78       regal.regal_inv_landed_cost_tab lc,
     79       fnd_user fnd,
     80       mtl_system_items msi
     81
     82    WHERE
     83       lc.organization_id = msi.organization_id
     84       AND  lc.inventory_item_id = msi.inventory_item_id
     85       AND  lc.created_by = fnd.created_by;
     86
     87  commit;
     88  v_message := SQL%ROWCOUNT;
     89  dbms_output.put_line('v_message');
     90  END;
     91  /
    v_message
    
    PL/SQL procedure successfully completed.
    SQL> select * from rcv_transactions_interface;
    
    no rows selected
    Thanks in advance!

    There is no problem with inserting data!
    There simply is no data! Your select statement retrieves no rows.
    You can see it from the output of your program (0 rows). It means there were no rows in the result set.

    Please check the output of your select statement independently:

    SELECT
    --        regal.regal_inv_landed_cost_seq.nextval,      --Interface_transaction_id
     --       V_A,                                          --Header Interface ID
    --        V_B,                                          --Group ID
            'Ship',                                       --Transaction Type
            sysdate,                                      --Transaction Date
            'PENDING',                                    --Processing Status Code
            'BATCH',                                      --Processing Mode Code
            'PENDING',                                    --Transaction Status Code
            lc.quantity_received,                          --Quantity
            lc.last_update_date,                          --last update date
            lc.last_updated_by,                           --last updated by
            sysdate,                                      --creation date
            lc.created_by,                                --created by
            'INVENTORY',                                  --Receipt source Code
            'INVENTORY',                                  --Destination Type Code
            'DELIVER' ,                                    --AUT Transact Code
            'INVENTORY',                                  --Source Document Code
             msi.primary_uom_code ,                       --Unit Of Measure
             msi.inventory_item_id,                        --Item ID
             msi.primary_unit_of_measure,                  --UOM COde
             fnd.user_id,
      --       V_A,                                         --Shipment Header ID
    --         V_B,                                         --SHipment Line ID
             82,                                           --To Organization ID
             'Brooklyn',                                     --Sub Inventory ID
             81,                                            --From Organization
             'Vessel'                                       --From Subinventory
         FROM
            regal.regal_inv_landed_cost_tab lc,
            fnd_user fnd,
            mtl_system_items msi
         WHERE
            lc.organization_id = msi.organization_id
            AND  lc.inventory_item_id = msi.inventory_item_id
            AND  lc.created_by = fnd.created_by;
    

    Edited by: hm on 13.10.2011 23:19

    I removed the references to the sequence and to the variables V_A and V_B.
    BTW: why do you want to insert V_A and V_B into two different columns each?

    The use of sequences in your code seems a bit strange to me. But this has nothing to do with your question.

  • Insert data into Oracle DB from MS Access Forms

    Dear professionals,
    How do we insert data into tables that reside in the Oracle DB through MS Access forms?
    We have already created ODBC linked tables to Oracle that allow us to select data, and that works.
    Unfortunately, we can only select data; insert, delete and update are not available via the MS Access form, even though the user has all permissions on the Oracle DB (grant select, insert, update, delete on oracle_table to access_user).


    driver: Microsoft ODBC for Oracle
    MS Access 2003
    Oracle DB 10.2

    THX in advance,
    Adnan

    One would need to know what "not available" means. Do you get a specific error? As far as I know, your statement is simply incorrect.
    There are perhaps incompatibilities between the Microsoft and Oracle DLLs; I would avoid the Microsoft ODBC for Oracle driver like the plague!

    ---------
    Sybrand Bakker
    Senior Oracle DBA

  • WRT54GS - wireless performance problem

    Dear someone

    I am currently having performance problems with my WRT54GSv1.1

    I just received my new 30 Mbit internet connection. Unfortunately, I am not able to download at that speed. If I connect via one of the Linksys router's ethernet ports, I'm a happy person, but using the wireless makes me a little less happy.

    Wireless: 15Mbit/s

    Through the cable: 27Mbit/s

    Network information:

    The distance between the router and the laptop is 3-4 meters

    Linksys WRT54GSv1.1 - Firmware Version: v4.71.4

    Wireless network - G-only mode

    Channel - 6 (because all the surrounding neighbors use 10-11, which I noticed using Cain & Abel 4.9.29 wireless discovery)

    Security: WPA2-Personal (TKIP + AES)

    All other settings are default (just reset to the factory settings).

    Laptop:

    Windows Vista Business SP1 32-bit

    Intel PRO/Wireless 3945ABG (installed the latest drivers from the Intel website, default settings)

    When I check the Intel diagnostic tools, they tell me the link is 54 megabits and the packet meter shows 54 megabits, while all the lower speeds remain the same.

    More diagnostic information:

    Percent missed beacons: 0

    Percent transmit errors: 18

    Current Tx power: ~ 32 mW (100%)

    Supported power levels: 1.0 mW - 32.0 mW.

    Hope someone can help me here.

    Thanks in advance!

    Kind regards

    Ski Klesman

    Although wireless G "connects" at 54 Mbit/s, the maximum possible wireless "data rate" (in ideal laboratory conditions) is only about 20 to 25 Mbit/s. Unfortunately, the phrase "in ideal laboratory conditions" generally does not describe your home!

    The transmission overhead for wireless connections is higher than for wired connections. Thus, most people find that their Wi-Fi connection works at 50% to 70% of their wired (LAN) speed. With your wired LAN connection speed being 27 Mbps, your 15 Mbps wireless speed is within normal limits.

    You might be able to tweak a bit more speed from your wireless network by optimizing all your wireless settings.  Here are a few suggestions:

    I assume you actually want WPA2 encryption. If so, set it to AES only. When you set the router to TKIP + AES, you actually tell the router to accept either a WPA or a WPA2 connection.

    Also, give your network a unique SSID. Do not use "linksys". If you use "linksys", your computer may try to connect to your neighbor's router. Also set "SSID Broadcast" to enabled. This will help your computer find and lock onto the signal from your router.

    Bad wireless connections are often caused by interference from other 2.4 GHz devices. This includes cordless phones, wireless baby monitors, microwave ovens, wireless mice and keyboards, wireless speakers and your neighbors' wireless networks. In rare cases, Bluetooth devices can interfere. Even some 5+ GHz phones also use the 2.4 GHz band. Disconnect these devices and see if that solves your problem.

    In your router, try another channel. There are 11 channels in the 2.4 GHz band; channel 1, 6 or 11 generally works best. Scan your neighbors and see what channels they use. Because the channels overlap, try to stay at least +5 or -5 channels away from your strongest neighbors. For example, if you have a strong neighbor on channel 9, try any channel from 1 to 4.

    Also, try placing the router about 4 to 6 feet above the ground in an open area. Do not place it behind your monitor or near other computer equipment or speakers. The antenna should be vertical.

    In addition, on the computer, go to your wireless software and open "Preferred networks" (sometimes called "Profiles"). There are probably a few networks listed. Remove any network called "linksys". Also remove any network that you don't recognize or no longer use. If your current network is not listed, enter its information (SSID, encryption (if any) and key (if any)). Select your current network, make it your default network, and set it to connect automatically. You may need to go to "Settings" to do this, or you may need to right-click on your network and select "Properties" or "Settings".

    If you continue to have problems, try the following:

    For wireless G routers, try fixing the transmission rate at 54 Mbps.

    If you still have problems, download and install the latest firmware for your router. After a firmware upgrade, you must reset the router to factory defaults and then configure it again from scratch. If you have saved a router configuration file, DO NOT use it.

    I hope this helps.
