Joining tables with the aggregate function

I have 4 tables that I joined to get my output fields: Joiner_1.
I have another table, and I used the Joiner_1 fields to match against this table to get Joiner_2. Then I used an Aggregator on Joiner_2 to get the sum and max values.

I connected all of the Joiner_1 output fields to the target table, and the 2 fields I got from Joiner_2 I also connected to the target table, but it gives me an error saying:

API8003: The attributes of the connected target group are already connected from an incompatible source. Use a Joiner or Set operator to join the data upstream before connecting it into this operator.

How do I do this?

Basically, what I'm trying to do is this:

For example, my target table has 10 fields.

I get 8 fields by joining a set of tables; the 2 other fields I need to get from another table by matching it against two of the output fields of my first join.

If my first join returns 8 rows, then for each row it returns I could have several rows in the other table that I need to sum up and put into my target table as the other 2 fields. My target table should still have 8 rows after this is all done.

If I simply join the other table to my first join, I get more rows.

Thanks in advance.

Hello

Try this,

After the 4th Joiner, I assume you have all of those fields in there.

Connect it to your Aggregator, then use a fifth Joiner to join the output of the 4th Joiner with the output of the Aggregator.

That way you would have all of those fields plus the fields from the Aggregator.

Then try connecting that to the target.

If this does not work please let me know.
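
For reference, here is roughly the SQL equivalent of that mapping logic, purely as an illustrative sketch (joiner_1_output, detail_table, match_key, and amount are invented names): aggregate the extra table first, then join the aggregate back to the 8-field result, so the row count stays at 8.

-- Illustrative only: aggregate the detail table, then join it back to the 8-row result
SELECT j1.*,                        -- the 8 fields coming out of the first joins
       agg.sum_amount,              -- the 2 aggregated fields for the target
       agg.max_amount
FROM   joiner_1_output j1
LEFT   JOIN (SELECT match_key,
                    SUM(amount) AS sum_amount,
                    MAX(amount) AS max_amount
             FROM   detail_table
             GROUP  BY match_key) agg
       ON agg.match_key = j1.match_key;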

Edited by: Dinesh.Sharma on June 8, 2009 09:31

Edited by: Dinesh.Sharma on June 8, 2009 09:32

Tags: Business Intelligence

Similar Questions

  • Bug with an aggregate function and no GROUP BY

    When I run the following query:
    with the_table as
    (
      select 1 as id, 100 as cost from dual
      union all select 2 as id, 200 as cost from dual
      union all select 3 as id, 300 as cost from dual
      union all select 4 as id, 400 as cost from dual
      union all select 5 as id, 500 as cost from dual
    )
    select id, cost
    from
    (
      select id, cost
      from the_table
      --
      union all
      --
      select 0 as id, sum(cost) as cost
      from the_table
      where 0 = 1
      -- group by 1
    )
    order by id;
    I get this result:
    ID    COST
    --  ------
     0  <null>     
     1     100
     2     200
     3     300
     4     400
     5     500
    However, when I uncomment the line "group by 1", the query works as expected (without the id = 0 row).

    This occurs even when "the_table" is an actual table.

    Has anyone else come across this (and if so, how long has it been a problem)?

    The database is 11.2.0.2.0 64-bit.

    EDIT: It also happens without a union - the following returns a single row (with id 0 and a null cost) without the GROUP BY, and no rows with it:
    select id, cost
    from
    (
      select 0 as id, sum(cost) as cost
      from 
      (
        select 1 as id, 100 as cost from dual
        union all select 2 as id, 200 as cost from dual
        union all select 3 as id, 300 as cost from dual
        union all select 4 as id, 400 as cost from dual
        union all select 5 as id, 500 as cost from dual
      )
      where 0 = 1
      -- group by 1
    )
    order by id
    Edited by: Donbot February 15, 2012 10:29

    Donbot wrote:
    Has anyone else come across this (and if so, how long has it been a problem)?

    The database is 11.2.0.2.0 64-bit.

    This is a documented behavior.

    http://docs.Oracle.com/CD/E11882_01/server.112/e26088/functions003.htm#SQLRF20035

    "
    All aggregate functions except COUNT(*), GROUPING, and GROUPING_ID ignore nulls. You can use the NVL function in the argument to an aggregate function to substitute a value for a null. COUNT and REGR_COUNT never return null, but return either a number or zero. For all the remaining aggregate functions, if the data set contains no rows, or contains only rows with nulls as arguments to the aggregate function, then the function returns null.
    "

  • Displaying columns with an aggregate function

    Hi all
    I have two sql queries. When I run them I get these results.
      SELECT DISTINCT fiscal_year, lvl_6_id, a.grp_id, segment1
         FROM ops_cv_extract a, fdev_group b
         WHERE a.grp_id = b.grp_id(+)
         AND lvl_6_id = 5694
         AND fiscal_year = 2009
         ORDER BY A.grp_id;
    
    fiscal_year lvl_6_id grp_id segment1
    2009     5694     -999     
    2009     5694     777     
    2009     5694     888     
    2009     5694     61493     11555
    2009     5694     61495     11555
    2009     5694     62322     11555
    2009     5694     62537     11555
    2009     5694     62772     11555
    2009     5694     62773     11555
        SELECT SUM(product_bkg), grp_id, period_num
            FROM ops_cv_extract
            WHERE lvl_6_id = 5694
            AND  fiscal_year = 2009
            GROUP BY grp_id, period_num
            ORDER BY grp_id;
    
    sum(product_bkg) grp_id period_num
    1151105.97     -999     1
    1828943.89     -999     2
    1074665.97     777     1
    1061934.5     777     2
    -10750.1     888     1
    51484.12     61493     1
    9582.79     61493     2
    51404.67     61495     1
    -187.57     61495     2
    1123     62322     2
    9069.6     62537     1
    1353.42     62537     2
    18026.8     62772     1
    104.4     62772     2
    17251.68     62773     1
    600     62773     2
    I want to write these two SQL queries as a single query to get this result:
    fiscal_year lvl_6_id sum(product_bkg) grp_id period_num segment1 
    2009 5694 1151105.97     -999     1
    2009 5694 1828943.89     -999     2
    2009 5694 1074665.97     777     1
    2009 5694 1061934.5       777     2
    2009 5694 -10750.1     888     1
    2009 5694 51484.12     61493     1  11555
    2009 5694 9582.79     61493     2 11555
    2009 5694 51404.67     61495     1 11555
    2009 5694 -187.57     61495     2 11555
    2009 5694 1123             62322     2 11555
    2009 5694 9069.6     62537     1 11555
    2009 5694 1353.42     62537     2 11555
    2009 5694 18026.8     62772     1 11555
    2209 5694 104.4     62772     2 11555
    2009 5694 17251.68     62773     1 11555
    2009 5694 600             62773     2 11555
    How is this possible?
    Thank you

    Published by: polasa on November 5, 2008 12:22

    You don't even need grouping sets to achieve this result:

    SQL> select a.lvl_6_id
      2       , a.fiscal_year
      3       , sum(a.product_bkg)
      4       , a.grp_id
      5       , a.period_num
      6       , b.segment1
      7    from ops_cv_extract a
      8         left outer join fdev_group b on (a.grp_id = b.grp_id)
      9   where a.lvl_6_id = 5694
     10     and a.fiscal_year = 2009
     11   group by a.lvl_6_id
     12       , a.fiscal_year
     13       , a.grp_id
     14       , a.period_num
     15       , b.segment1
     16   order by a.grp_id
     17  /
    
      LVL_6_ID FISCAL_YEAR SUM(A.PRODUCT_BKG)     GRP_ID PERIOD_NUM   SEGMENT1
    ---------- ----------- ------------------ ---------- ---------- ----------
          5694        2009            2398,32       -999          1
          5694        2009            2300,23       -999          2
          5694        2009            2543,55        777          1
          5694        2009               2390        777          2
          5694        2009            9834,92        888          1
          5694        2009           12132,46      61493          1      11555
          5694        2009            3209,23      61493          2      11555
    
      7 rows selected.
    

    Kind regards
    Rob.

  • Analytic functions and aggregate functions

    What are analytic functions and aggregate functions? What is the difference between them?

    Hello

    Analytic functions:

    Analytic functions compute an aggregate value based on a group of rows. They differ from aggregate functions in that they can return multiple rows for each group. The group of rows is called a window and is defined by the analytic_clause. For each row, a sliding window of rows is defined; the window determines the range of rows used to perform the calculations for the current row. Window sizes can be based on either a physical number of rows or a logical interval such as time.
    Analytic functions are the last set of operations performed in a query, except for the final ORDER BY clause. All joins and all WHERE, GROUP BY, and HAVING clauses are completed before the analytic functions are processed. Therefore, analytic functions can appear only in the select list or the ORDER BY clause.
    Analytic functions are commonly used to compute cumulative, moving, centered, and reporting aggregates.

    Aggregate functions:

    Aggregate functions return a single result row based on groups of rows, rather than on single rows. Aggregate functions can appear in select lists and in ORDER BY and HAVING clauses. They are commonly used with the GROUP BY clause in a SELECT statement, where Oracle Database divides the rows of a queried table or view into groups. In a query containing a GROUP BY clause, the elements of the select list can be aggregate functions, GROUP BY expressions, constants, or expressions involving one of these. Oracle applies the aggregate functions to each group of rows and returns a single result row for each group.
    If you omit the GROUP BY clause, then Oracle applies aggregate functions in the select list to all the rows in the queried table or view. You use aggregate functions in the HAVING clause to eliminate groups from the output based on the results of the aggregate functions, rather than on the values of the individual rows of the queried table or view.
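
    As a small illustration, using the classic SCOTT.EMP demo table (the table is just an example, not from the question): an aggregate SUM with GROUP BY collapses the result to one row per department, while the analytic SUM ... OVER keeps every employee row and attaches the department total to each of them.

    -- Aggregate: one result row per group
    SELECT deptno, SUM(sal) AS dept_total
    FROM   scott.emp
    GROUP  BY deptno;

    -- Analytic: every row is kept; each row carries its group's total
    SELECT ename, deptno,
           SUM(sal) OVER (PARTITION BY deptno) AS dept_total
    FROM   scott.emp;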

    Let me know if you have any problems understanding this.
    Thank you.

    Published by: varun4dba on January 27, 2011 15:32

  • Using journalized data in an interface with an aggregate function

    Hello

    I'm trying to use journalized data from a source table in one of my interfaces in ODI. The problem is that one of the mappings on the target columns involves an aggregate function (SUM). When I run the interface, I get an error saying "not a group by expression". I checked the code and found that the columns jrn_subscriber, jrn_flag, and jrn_date are included in the select statement, but not in the group by statement (the group by contains only the remaining two columns of the target table).

    Is there a way to get around this? Do I have to manually change the KM? If so, how would I go about doing it?

    Also, I'm using the Oracle GoldenGate JKM (OGG Oracle to Oracle).

    Thanks and really appreciate the help

    Ajay

    "ORA-00979"when the CDC feature (logging) using ODI with Modules of knowledge including the aggregate SQL function works [ID 424344.1]
    Updated 11 March 2009 Type status MODERATE PROBLEM

    In this Document
    Symptoms
    Cause
    Solution
    Alternatives:

    This document is made available to you through Oracle Support's Rapid Visibility (RaV) process and therefore has not been subject to an independent technical review.

    Applies to:
    Oracle Data Integrator - Version: 3.2.03.01
    This problem can occur on any platform.
    Symptoms
    After successfully testing an ODI integration Interface that uses an aggregate function such as MIN, MAX, or SUM, it is necessary to implement Changed Data Capture operations using journalized tables.

    However, when the integration Interface is executed to retrieve only the journalized records, the Loading Knowledge Module's "Load Data" step runs into problems and the following message appears in the ODI log:

    ORA-00979: not a GROUP BY expression
    Cause
    Using both CDC (journalizing) and aggregate functions gives rise to complex problems.
    Solution

    Technically, there is a workaround for this problem (see below).
    WARNING: Oracle engineers have issued a severe caution that this type of setup may give results that are not what you would expect. This is related to the way ODI journalizing is implemented, in the form of specific journalizing tables. In this case, the aggregate function works only on the subset that is stored (referenced) in the journalizing table, and not on the whole of the Source table.

    We recommend that you avoid this type of integration Interface setup.
    Alternatives:

    1. The problem is due to the JRN_* columns missing from the GROUP BY clause of the generated SQL.

    The workaround is to duplicate the Loading Knowledge Module (LKM) and, in the clone, change the "Load Data" step by editing the 'Command on Source' tab and substituting the following statement:
    <%=odiRef.getGrpBy()%>

    with
    <%=odiRef.getGrpBy()%>
    <%if ((odiRef.getGrpBy().length() > 0) && (odiRef.getPop("HAS_JRN").equals("1"))) {%>
    JRN_FLAG, JRN_SUBSCRIBER, JRN_DATE
    <%}%>
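
    To make the cause concrete, here is a hypothetical example of the kind of statement the LKM generates (table and column names are invented): the JRN_* columns appear in the select list, so they must also appear in the GROUP BY, which is exactly what the substitution above appends.

    -- Fails with ORA-00979: JRN_* columns are selected but not grouped
    SELECT cust_id, SUM(amount) AS total_amount, JRN_FLAG, JRN_SUBSCRIBER, JRN_DATE
    FROM   j$orders
    GROUP  BY cust_id;

    -- Works once the KM appends the JRN_* columns to the GROUP BY
    SELECT cust_id, SUM(amount) AS total_amount, JRN_FLAG, JRN_SUBSCRIBER, JRN_DATE
    FROM   j$orders
    GROUP  BY cust_id, JRN_FLAG, JRN_SUBSCRIBER, JRN_DATE;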

    2. It is possible to develop two alternative solutions:

    (a) Develop two separate and distinct integration Interfaces:

    * The first integration Interface loads the data into a temporary Table; specify the aggregate functions in this initial integration Interface.
    * The second integration Interface uses the temporary Table as its Source. Note that if you create the Table in the Interface, it is necessary to drag and drop the first integration Interface into the Source panel.

    (b) Define two separate and distinct database connections, so that the integration Interface references two Data Servers (one for the journal tables, one for the other Tables). In this case, the aggregate function will be executed on the Source schema.

    Related Products

    * Middleware > Business Intelligence > Oracle Data Integrator (ODI) > Oracle Data Integrator

    Keywords
    ODI; AGGREGATE; ORACLE DATA INTEGRATOR; KNOWLEDGE MODULES; CDC; SUNOPSIS
    Errors
    ORA-979

    Please find above the content of the note.
    You should find it if you search for this ID in the Knowledge Base.

    See you soon
    Sachin

  • How to compare the length of the data in a staging table with the base table definition

    Hello
    I have two tables: a staging table and a base table.
    I get flat-file data into the staging table. Per the requirement, the structures of the staging table and the base table are different (the length of each column in the staging table is 25% larger so the data loads without errors); for example, if the city column is VARCHAR2(40) in the staging table, it is VARCHAR2(25) in the base table. Once the data is loaded into the staging table, I want to compare the actual length of the data in each column of the staging table with the base table definition (data_length for each column from all_tab_columns), and if any column's data is too long for the base table, I need to update the corresponding row in the staging table, which also has a flag column called err_length.

    So for that I use: cursor c1 is select length(a.id), length(b.sid) from staging_table;
    cursor c2 (name varchar2) is select data_length from all_tab_columns where table_name = 'BASE_TABLE' and column_name = name;
    But I get all the data at once from the first cursor, while with the second cursor I need to fetch it per column and then compare it with the first.
    Can someone tell me how to get the desired results?

    Thank you
    Manoi.

    Hey, Marco.

    Sure, you can compute src.err_length in the USING clause (where you can reference all_tab_columns) and use that value in the SET clause.
    Like this:

    MERGE INTO  staging_table   dst
    USING  (
           WITH     got_lengths     AS
                     (
              SELECT  MAX (CASE WHEN column_name = 'ENAME' THEN data_length END)     AS ename_len
              ,     MAX (CASE WHEN column_name = 'JOB'   THEN data_length END)     AS job_len
              FROM     all_tab_columns
              WHERE     owner          = 'SCOTT'
              AND     table_name     = 'EMP'
              )
         SELECT     s.ename
         ,     s.job
         ,     CASE WHEN LENGTH (s.ename) > l.ename_len THEN 'ENAME ' END     ||
              CASE WHEN LENGTH (s.job)   > l.job_len   THEN 'JOB '   END     AS err_length
         FROM     staging_table     s
         JOIN     got_lengths     l     ON     LENGTH (s.ename)     > l.ename_len
                             OR     LENGTH (s.job)          > l.job_len
         )     src
    ON     (src.ename     = dst.ename)
    WHEN MATCHED THEN UPDATE
         SET     dst.err_length     = src.err_length
    ;
    

    As you can see, you have to hardcode the common column names in several places. I tried to simplify that, and found an interesting (at least to me) alternative involving the user-defined aggregate function STRAGG.
    As you can see, only the USING subquery is changed.

    MERGE INTO  staging_table   dst
    USING  (
           SELECT       s.ename
           ,       s.job
           ,       STRAGG (l.column_name)     AS err_length
           FROM       staging_table          s
           JOIN       all_tab_columns     l
          ON       l.data_length  < LENGTH ( CASE  l.column_name
                                              WHEN  'ENAME'
                                    THEN      ename
                                    WHEN  'JOB'
                                    THEN      job
                                       END
                               )
           WHERE     l.owner      = 'SCOTT'
           AND      l.table_name     = 'EMP'
           AND      l.data_type     = 'VARCHAR2'
           GROUP BY      s.ename
           ,           s.job
           )     src
    ON     (src.ename     = dst.ename)
    WHEN MATCHED THEN UPDATE
         SET     dst.err_length     = src.err_length
    ;
    

    Instead of the user-defined STRAGG (which you can copy from AskTom), you can also use the undocumented WM_CONCAT function or, starting in Oracle 11.2, the built-in LISTAGG function.
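
    For example (assuming Oracle 11.2 or later), the STRAGG call in the second MERGE could be replaced with the built-in function, which needs a delimiter and a WITHIN GROUP clause:

    LISTAGG (l.column_name, ' ') WITHIN GROUP (ORDER BY l.column_name)     AS err_length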

  • How to find the number of data items in a file written with the ArrayToFile function?

    I wrote an array of numbers into a file in 2 groups of columns using the LabWindows/CVI ArrayToFile function. Now, if I want to read the file back with the FileToArray function, how do I know the number of items in the file? At the time of writing I know how many array elements I am writing, but suppose I want the file to be read at a later time; how do I then find the number of items in the file, so that I can read the exact number and present it? Thank you all.

    Hello

    I start with the second question:

    bytes_read = ReadLine (file_handle, line_buffer, maximum_bytes);

    the second argument is the buffer to store the characters read, so it's an array of characters; it must be large enough to hold maximum_bytes plus the terminating NUL, i.e. char[maximum_bytes + 1]

    So, obviously, the number of lines in your text file can be determined in a loop:

    /* open the file, then: */
    lines = 0;
    while (ReadLine (file_handle, line_buffer, maximum_bytes) > 0)
    {
        lines++;
    }
    /* close the file */

  • How to export data to Excel from 2 tables with the same number of columns and the same column names?

    Hi everyone, once again I've ended up with a problem.

    After trying many things myself, I finally decided to post here...

    I created a form in Forms Builder 6i in which, on clicking a button, the data gets exported to an Excel sheet.

    It works very well with a single table. The problem now is that I cannot do the same with 2 tables.

    This is because the tables have the same number of columns and the same column names.

    Here are the 2 tables with column names:

    Table-1 (MONTHLY_PART_1)     Table-2 (MONTHLY_PART_2)
    SL_NO                        SL_NO
    MODEL                        MODEL
    END_DATE                     END_DATE
    U-1                          U-1
    U-2                          U-2
    U-4                          U-4
    ...                          ...
    U-20                         U-20
    U-25                         U-25

    Since the tables have the same column names, I get the following error:

    Error 402 at line 103, column 4

    aliases required in the SELECT list of the cursor to avoid duplicate column names.

    So how do I export data to Excel from 2 tables that have the same number of columns and the same column names?

    Should I paste the code? Should I post this question in the 'SQL and PL/SQL' forum?

    Help me with this please.

    Thank you.

    Wait a second... is this some kind of home-grown partitioning? Shouldn't it be a union of the two tables instead of a join?

    see you soon
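
    For illustration, a minimal sketch of the two options hinted at above (the join key SL_NO is an assumption, not from the post): either alias the duplicated columns in the cursor's SELECT list, or UNION the two tables if they really hold the same kind of rows.

    -- Option 1: join with aliases, so the cursor has no duplicate column names
    SELECT a.sl_no  AS sl_no_1,  b.sl_no  AS sl_no_2,
           a.model  AS model_1,  b.model  AS model_2
    FROM   monthly_part_1 a
    JOIN   monthly_part_2 b ON a.sl_no = b.sl_no;

    -- Option 2: union, if the two tables are really two halves of the same data
    SELECT sl_no, model, end_date FROM monthly_part_1
    UNION ALL
    SELECT sl_no, model, end_date FROM monthly_part_2;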

  • Satellite A100-220: is it possible to add WLAN functionality?

    As in the title: is it possible to extend the Satellite A100-220 - which has no built-in WLAN - with WLAN functionality?

    Hello

    It is not a problem to add WLAN functionality to a unit without an internal WLAN card. You can use a little USB WLAN stick; it is very small and easy to configure. Before you buy anything, get the info from your local dealer.

  • Kindly tell me how to get the single value of an array at index 0

    Kindly tell me how to get the single value of an array at index 0.

    Hi
     
    Yep, use Index Array as Gerd says. Also, using the context help (Ctrl+H) and looking through the Array palette will help you get an understanding of what each VI does.
     
    This is fundamental LabVIEW stuff; perhaps you'd be better off spending some time going through the basics.
     
    -CC
  • Cascading LOV when I have only one table with the customer name and the product name in ADF

    Hi, please help me with how to use cascading LOVs when I have only one table with the customer name and the product name in ADF... I use JDeveloper 11.1.

    For the customer, I use a customer VO with the customer list to populate it, but to populate the product I use a bind variable: select distinct PRODUCT_NAME from TABLE where client_name = :bindCustomer

    So first of all, I need to set the bind variable to the name of the selected customer.

    Can you please tell me how to set this bind variable in this case?

    After you set the LOV on your product attribute, the corresponding VO of the LOV will appear under View Accessors.

    Edit the view accessor and you will see the bind variable. Set its value to the customer field of the parent object.

    Visit this link: https://www.youtube.com/watch?v=nXwL2_RP7AQ

    Kind regards

    Elias.

  • Problems with the Row_Number function

    I am having problems with the ROW_NUMBER function. I use it to assign line numbers to records where a student has a passing grade on a module, excluding the failed modules (I want to show a 0 as the line number for the failed modules). The problem is that even with a condition, the report still assigns a line number to a failed module even though it does not display it (it shows the 0 I wanted). The results are displayed as follows:

    Line number        Module     Grade
    1                  ModuleA    Pass
    2                  ModuleB    Pass
    0                  ModuleC    Fail
    4 (instead of 3)   ModuleD    Pass

    How can I make it skip assigning a line number to the failed modules? Please help.

    Thank you.

    Thank you very much, Melanie. I changed the query as per your suggestion, to a union of the failed and the passed modules (using ROW_NUMBER on the passed modules only). Thanks for the solution.
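
    A minimal sketch of that union approach (student_results, module_name, and grade are assumed names, not from the thread): number only the passed modules with ROW_NUMBER, and emit the failed ones with a fixed 0.

    SELECT ROW_NUMBER() OVER (ORDER BY module_name) AS line_number,
           module_name, grade
    FROM   student_results
    WHERE  grade = 'Pass'
    UNION ALL
    SELECT 0 AS line_number, module_name, grade
    FROM   student_results
    WHERE  grade = 'Fail'
    ORDER  BY module_name;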

  • Fill a table with the results of the refresh groups

    Hello world

    I need a little help.

    I'm working on Oracle 10.2.0.4 on Windows.

    I have a table I created like this:
    Table name: DIM_REPLICA

    COD_SEZ    VARCHAR2(2)
    NOME_SEZ   VARCHAR2(20)
    FLAG       CHAR(1)
    D_REPLICA  DATE

    In this DB I have 210 refresh groups running every night. I need to fill this table with the results of the refresh groups.

    So when, for example, the refresh group called ROME runs, I need to write to the table: the name ROME into the NOME_SEZ field, a Y or N into the FLAG field depending on whether the refresh group worked, and the LAST_DATE the refresh ran into the D_REPLICA field. The COD_SEZ field is a code that I get from other things; it is not needed for the moment, I can add it myself.

    Can someone help me please?

    I was looking at the SYS views DBA_JOBS and DBA_REFRESH for this data, but I don't know what to take from them or how to fill the table. A trigger? A procedure? Any help will be great!

    Thank you all in advance!

    This forum is for SQL*Plus questions, and your question is about general Oracle issues. You will get a better response by posting your question in another forum - probably Database General.

    Please close this thread and start over in another forum.
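
    For what it's worth, a rough sketch of the kind of insert the poster describes, to be run after the nightly refreshes. The DIM_REPLICA columns are from the post; the join through the JOB column and the use of DBA_JOBS.FAILURES as the success indicator are assumptions.

    INSERT INTO dim_replica (nome_sez, flag, d_replica)
    SELECT r.rname,                                            -- refresh group name, e.g. ROME
           CASE WHEN NVL(j.failures, 0) = 0 THEN 'Y' ELSE 'N' END,
           j.last_date                                         -- last time the refresh job ran
    FROM   dba_refresh r
           JOIN dba_jobs j ON j.job = r.job
    WHERE  r.rname = 'ROME';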

  • Problem with the GetParameter() function in IScript

    Hello

    I am facing a problem with the GetParameter() function in an IScript. I build the URL below and call the IScript:

    GenerateScriptContentURL ("EMPLOYEE", "MFC", Record.WEBLIB_REPT_SJ, Field.ISCRIPT1, "FieldFormula", "IScript_GetAttachment"). "? FileName =' | & AttachUserFileURL;

    Before generating the URL, I encrypt the name of the ZIP file and assign it to the string variable &AttachUserFileURL, which I concatenate into the link above.

    Then I try to get the encrypted text back for decryption via %Request.GetParameter("FileName") in the IScript, but it is not able to retrieve special characters such as + and =.

    Please help with this.

    Thank you

    Edited by: 936729 may 25, 2012 03:35

    + and = are allowed in URLs. You just have to URL-encode them, like this: EncodeURLForQueryString(&AttachUserFileURL), before adding them to your URL:

    GenerateScriptContentURL("EMPLOYEE", "HRMS", Record.WEBLIB_REPT_SJ, Field.ISCRIPT1, "FieldFormula", "IScript_GetAttachment") | "?FileName=" | EncodeURLForQueryString(&AttachUserFileURL);
    

    Here is the PeopleBooks entry for EncodeURLForQueryString:

    http://docs.Oracle.com/CD/E28394_01/pt852pbh1/Eng/psbooks/TPCL/htm/tpcl02.htm#_6453b1b1_1355ab71343__503e

  • Updating (incorrect) column values in a table with the correct values from a copy of the same table

    Database is 10gR2. We had a situation last night where someone inadvertently changed column values on a couple of hundred thousand records to an incorrect value first thing in the morning and didn't let me know until later in the day. My undo retention was not large enough to create a copy of the table as it was 7 hours back with an "insert into table_2 select * from table_1 as of timestamp ..." query, so I restored the previous night's backup to another machine, recovered it to 7:00 (just before the time he made the change), created a dblink from the production database, and created a copy of the table from the restored database.

    My first thought was to simply update the production table with the correct values from the copy, using something like this:


    Update mnt.workorders
    Set approvalstat = (select b.approvalstat
    from mnt.workorders a, mnt.workorders_copy b
    where a.workordersoi = b.workordersoi)
    where exists (select *
    from mnt.workorders a, mnt.workorders_copy b
    where a.workordersoi = b.workordersoi)

    It wasn't the exact syntax, but you get the idea: I wanted to update the incorrect values in x columns of the production table with the correct values from the copy of the table from the restored backup. Anyway, it was (or seemed to be) working, but when I watched the process through OEM it was estimated at 100+ hours with full table scans, so I killed it. I ended up just inserting (copying) into the copy the rows added to production since 7:00 (a select from the production table where <col_with_datestamp> >= 07:00), truncating the production table, and then re-inserting the rows from the now-correct copy.

    Doing a post-mortem today, I replayed the scenario on the copy that I restored, trying to figure out a cleaner, quicker way to do it if the need arises again. I went and randomly changed some values in a number column (called "comappstat") in a copy of the production table, and then thought I would try the following to reset the values from the correct table:

    Update (select a.comappstat, b.comappstat
    from mnt.workorders a, mnt.workorders_copy b
    where a.workordersoi = b.workordersoi -- this is a PK column
    and a.comappstat != b.comappstat)
    Set b.comappstat = a.comappstat

    Although I thought the syntax was correct, I get an "ORA-00904: "A"."COMAPPSTAT": invalid identifier" when running this. I was trying to work out where the syntax was wrong, then thought that perhaps having the subquery return a single row would be cleaner and faster anyway, so I gave up on that and instead tried this:

    Update mnt.workorders_copy
    Set comappstat = (select distinct a.comappstat
    from mnt.workorders a, mnt.workorders_copy b
    where a.workordersoi = b.workordersoi
    and a.comappstat != b.comappstat)
    where a.comappstat != b.comappstat
    and a.workordersoi = b.workordersoi

    The subquery executed on its own returns the single value 9, which is the correct value of the column in the prod table, and I want to replace the incorrect value of '12' with it (I had updated the copy to change the comappstat column to 12 everywhere it was 9). However, when I run the query I get this error:

    ERROR at line 8:
    ORA-00904: "B"."WORKORDERSOI": invalid identifier

    First of all, I don't see why the update statement does not work (it's probably obvious, but I'm not seeing it).

    Secondly, is this the best approach for updating a column (or columns) that is incorrect with the corresponding correct columns from a copy of the same table, or is there a better way?

    I would sooner update the table than delete or truncate and re-insert, since there is an insert/update trigger I had to disable for the re-insert, and the truncate left the table unusable to the application until the re-insert was done.

    Thank you

    Hello

    First of all, after 79 posts, you should know how to format your code.

    Your last query reads as follows:

    UPDATE
      mnt.workorders_copy
    SET
      comappstat =
      (
        SELECT DISTINCT
          a.comappstat
        FROM
          mnt.workorders a
        , mnt.workorders_copy b
        WHERE
          a.workordersoi    = b.workordersoi
          AND a.comappstat != b.comappstat
      )
    WHERE
      a.comappstat      != b.comappstat
      AND a.workordersoi = b.workordersoi
    

    This will not work, for several reasons:
    The subquery is where you define a and b; outside the brackets you cannot refer to a or b.
    There is no link between the mnt.workorders_copy in the UPDATE and the one in the subquery.

    If you do this you should have something like this:

    UPDATE
      mnt.workorders     A      -- THIS IS THE TABLE YOU WANT TO UPDATE
    SET
      A.comappstat =
      (
        SELECT
          B.comappstat
        FROM
          mnt.workorders_copy B   -- THIS IS THE TABLE WITH THE CORRECT (OLD) VALUES
        WHERE
          a.workordersoi    = b.workordersoi      -- THIS MUST BE THE KEY
          AND a.comappstat != b.comappstat
      )
    WHERE
      EXISTS
      (
        SELECT
          B.comappstat
        FROM
          mnt.workorders_copy B
        WHERE
          a.workordersoi    = b.workordersoi      -- THIS MUST BE THE KEY
          AND a.comappstat != b.comappstat
      )
    

    The speed is not so good, because you run the subquery for each row in mnt.workorders.
    Note the condition in the WHERE clause: you need it, otherwise you will update the unchanged values to null.

    I would do it like this:

    UPDATE
      (
        SELECT
          A.workordersoi
          ,A.comappstat
          ,B.comappstat           comappstat_OLD
    
        FROM
          mnt.workorders        A      -- THIS IS THE TABLE YOU WANT TO UPDATE
          ,mnt.workorders_copy  B      -- THIS IS THE TABLE WITH THE CORRECT (OLD) VALUES
    
        WHERE
          a.workordersoi    = b.workordersoi      -- THIS MUST BE THE KEY
          AND a.comappstat != b.comappstat
      ) C
    
    SET
      C.comappstat = comappstat_OLD
    ;
    

    This way you can test the subquery first and know exactly what will be updated.
    As there is no subquery executed for each row, performance should be better.

    Kind regards

    Peter
