Cannot use analytic functions such as LAG/LEAD in ODI 12c components except in expressions

Hi, I am a beginner with ODI 12c.

I'm trying to get the last two comments made on a product for a given product ID, and load them into a target.

I have a source table something like

Product  SR_NO  comments   LAST_UPDATED_TS

car      1      good       2015/05/15 08:30:25
car      1      average    2015/05/15 10:30:25
Jeep     2      super      2015/05/15 11:30:25
car      1      bad        2015/05/15 11:30:25
Jeep     2      horrible   2015/05/15 09:30:25
Jeep     2      excellent  2015/05/15 12:30:25


I want a target table holding, based on the last-updated timestamp, the last two comments:


SR_NO  Comment1   Comment2

1      bad        average
2      excellent  super

I used the logic below to get the records in SQL Developer, but in ODI 12c I'm not able to do this by mapping a source to the target table and applying analytic functions to the target columns. Can someone help me solve this problem?

SELECT * FROM (
    SELECT SR_NO,
           comments AS Comment1,
           LAG(comments, 1) OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS ASC) AS Comment2,
           ROW_NUMBER() OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS DESC) AS RN
    FROM Source_table
) M
WHERE RN = 1
;

I'm afraid ODI applies the filter too early in the query, i.e. it generates:

SELECT * FROM (
    SELECT SR_NO,
           comments AS Comment1,
           LAG(comments, 1) OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS ASC) AS Comment2,
           ROW_NUMBER() OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS DESC) AS RN
    FROM Source_table
    WHERE RN = 1
) M
;

Instead of:

SELECT * FROM (
    SELECT SR_NO,
           comments AS Comment1,
           LAG(comments, 1) OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS ASC) AS Comment2,
           ROW_NUMBER() OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS DESC) AS RN
    FROM Source_table
) M
WHERE RN = 1
;

Even by changing the 'Execute on Hint' of the expression component to run on the source, the generated query stays the same.

I think the easiest solution for you is to put everything before the filter into a reusable mapping with an output signature. Then drag this reusable mapping into your mapping as the new source and check the 'Subselect enabled' box.

Your final mapping should look like this:

Hope this helps.
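As a sanity check on the expected rows, the same LAG/ROW_NUMBER logic can be mimicked in plain Python (an illustration of the logic only, not ODI or database code):

```python
from collections import defaultdict

# Sample rows from the question: (product, sr_no, comment, timestamp)
rows = [
    ("car",  1, "good",      "2015/05/15 08:30:25"),
    ("car",  1, "average",   "2015/05/15 10:30:25"),
    ("Jeep", 2, "super",     "2015/05/15 11:30:25"),
    ("car",  1, "bad",       "2015/05/15 11:30:25"),
    ("Jeep", 2, "horrible",  "2015/05/15 09:30:25"),
    ("Jeep", 2, "excellent", "2015/05/15 12:30:25"),
]

def last_two_comments(rows):
    """For each SR_NO, return (latest_comment, previous_comment),
    mimicking ROW_NUMBER()/LAG() over LAST_UPDATED_TS."""
    by_sr_no = defaultdict(list)
    for _, sr_no, comment, ts in rows:
        by_sr_no[sr_no].append((ts, comment))
    result = {}
    for sr_no, items in by_sr_no.items():
        items.sort()                     # ascending by timestamp
        comment1 = items[-1][1]          # the RN = 1 row (latest)
        comment2 = items[-2][1] if len(items) > 1 else None  # the LAG value
        result[sr_no] = (comment1, comment2)
    return result

print(last_two_comments(rows))
# {1: ('bad', 'average'), 2: ('excellent', 'super')}
```

For SR_NO 1 the latest comment is 'bad' and the one before it is 'average', matching the target rows above.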

Kind regards

JeromeFr

Tags: Business Intelligence

Similar Questions

  • date ranges - possible to use analytical functions?

    The following data structure must be converted into a date-range structure.
    START_DATE END_DATE      AMMOUNT
    ---------- ---------- ----------
    01-01-2010 28-02-2010         10
    01-02-2010 31-03-2010         20
    01-03-2010 31-05-2010         30
    01-09-2010 31-12-2010         40
    Working solution:
    with date_ranges
    as   ( select to_date('01-01-2010','dd-mm-yyyy') start_date
           ,      to_date('28-02-2010','dd-mm-yyyy') end_date
           ,      10                                 ammount
           from   dual
           union all
           select to_date('01-02-2010','dd-mm-yyyy') start_date
           ,      to_date('31-03-2010','dd-mm-yyyy') end_date
           ,      20                                 ammount
           from   dual
           union all
           select to_date('01-03-2010','dd-mm-yyyy') start_date
           ,      to_date('31-05-2010','dd-mm-yyyy') end_date
           ,      30                                 ammount
           from   dual
           union all
           select to_date('01-09-2010','dd-mm-yyyy') start_date
           ,      to_date('31-12-2010','dd-mm-yyyy') end_date
           ,      40                                 ammount
           from   dual
          )
    select   rne.start_date
    ,        lead (rne.start_date-1,1)  over (order by rne.start_date) end_date
    ,        ( select sum(dre2.ammount)
               from   date_ranges dre2
               where  rne.start_date >= dre2.start_date
               and    rne.start_date <= dre2.end_date
             ) range_ammount
    from     ( select dre.start_date
               from   date_ranges dre
               union -- implicit distinct
               select dre.end_date + 1
               from   date_ranges dre
             ) rne
    order by rne.start_date
    /
    Output:
    START_DATE END_DATE   RANGE_AMMOUNT
    ---------- ---------- -------------
    01-01-2010 31-01-2010            10
    01-02-2010 28-02-2010            30
    01-03-2010 31-03-2010            50
    01-04-2010 31-05-2010            30
    01-06-2010 31-08-2010
    01-09-2010 31-12-2010            40
    01-01-2011
    
    7 rows selected.
    However, I would like to use an analytical function to calculate the range_ammount. Is this possible?

    Published by: user5909557 on July 29, 2010 06:19

    Hello

    Welcome to the forum!

    Yes, you can replace the scalar sub-query with an analytic SUM, like this:

    WITH  change_data   AS
    (
         SELECT     start_date     AS change_date
         ,     ammount          AS net_amount
         FROM     date_ranges
              --
        UNION
              --
         SELECT     end_date + 1     AS change_date
         ,     -ammount        AS net_amount
         FROM     date_ranges
    )
    ,     got_range_amount     AS
    (
         SELECT     change_date          AS start_date
         ,     LEAD (change_date) OVER (ORDER BY  change_date) - 1
                                     AS end_date
         ,     SUM (net_amount)   OVER (ORDER BY  change_date)
                                    AS range_amount
         FROM    change_data
    )
    ,     got_grp          AS
    (
         SELECT     start_date
         ,     end_date
         ,     range_amount
         ,     ROW_NUMBER () OVER ( ORDER BY        start_date, end_date)
               - ROW_NUMBER () OVER ( PARTITION BY  range_amount
                                         ORDER BY          start_date, end_date
                           )         AS grp
         FROM    got_range_amount
    )
    SELECT       MIN (start_date)     AS start_date
    ,       MAX (end_date)     AS end_date
    ,       range_amount
    FROM       got_grp
    GROUP BY  grp
    ,            range_amount
    ORDER BY  grp
    ;
    

    This should be much more efficient.
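    The idea behind the analytic rewrite (each range contributes +amount on its start_date and -amount on the day after its end_date; a running total over the sorted change dates then gives the amount in force for each sub-range) can be sketched in Python. The data below restates the four sample ranges; this is an illustration of the technique, not Oracle code:

```python
from datetime import date, timedelta

# The four sample ranges: (start_date, end_date, amount)
ranges = [
    (date(2010, 1, 1), date(2010, 2, 28), 10),
    (date(2010, 2, 1), date(2010, 3, 31), 20),
    (date(2010, 3, 1), date(2010, 5, 31), 30),
    (date(2010, 9, 1), date(2010, 12, 31), 40),
]

def range_amounts(ranges):
    """Turn each range into +amount at start_date and -amount at
    end_date + 1 day, then take a running total over the change dates
    (the same idea as the UNION plus analytic SUM in the query above)."""
    changes = {}
    for start, end, amount in ranges:
        changes[start] = changes.get(start, 0) + amount
        after_end = end + timedelta(days=1)
        changes[after_end] = changes.get(after_end, 0) - amount
    running, out = 0, []
    for change_date in sorted(changes):
        running += changes[change_date]
        out.append((change_date, running))
    return out

for d, amt in range_amounts(ranges):
    print(d, amt)
```

    The running totals come out as 10, 30, 50, 30, 0, 40, 0, matching the range amounts in the output above.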

    The code is longer than what you posted, largely because it consolidates consecutive groups with the same amount.
    For example, if you add this row to the sample data:

    --
           union all
           select to_date('02-01-2010','dd-mm-yyyy') start_date
           ,      to_date('30-12-2010','dd-mm-yyyy') end_date
           ,      0                                 ammount
           from   dual
    

    The query you posted produces:

    START_DAT END_DATE  RANGE_AMMOUNT
    --------- --------- -------------
    01-JAN-10 01-JAN-10            10
    02-JAN-10 31-JAN-10            10
    01-FEB-10 28-FEB-10            30
    01-MAR-10 31-MAR-10            50
    01-APR-10 31-MAY-10            30
    01-JUN-10 31-AUG-10             0
    01-SEP-10 30-DEC-10            40
    31-DEC-10 31-DEC-10            40
    01-JAN-11
    

    I assume you want a new output row only where range_amount changes, that is:

    START_DAT END_DATE  RANGE_AMOUNT
    --------- --------- ------------
    01-JAN-10 31-JAN-10           10
    01-FEB-10 28-FEB-10           30
    01-MAR-10 31-MAR-10           50
    01-APR-10 31-MAY-10           30
    01-JUN-10 31-AUG-10            0
    01-SEP-10 31-DEC-10           40
    01-JAN-11                      0
    

    Of course, you could change the original query to do that, but it would end up just as complex as the query above, and less efficient.
    Conversely, if you prefer the longer output, then you don't need the got_grp sub-query in the query above.

    Thanks for posting the CREATE TABLE and INSERT statements; it is very useful.
    Some people have used this forum for years and still have to be begged to do that.

  • consolidating date ranges using analytic functions

    I am trying to establish how long a person was in a location. The data looks like this:
    person     locator  recorded_date
    --------------------------------------------------------
    PERSON_X   LOC_A    01/01/2012 10:10
    PERSON_X   LOC_A    03/01/2012 15:10
    PERSON_X   LOC_B    04/01/2012 02:00
    PERSON_X   LOC_B    05/01/2012 11:10
    PERSON_X   LOC_A    06/01/2012 03:10

    What I want in the output is to divide this into 3 ranges. What I get with MIN and RANK is a grouping of the last LOC_A with the first, which glosses over the time spent in a different location in between.
    person     locator  start_date        stop_date
    -----------------------------------------------------------------------------------
    PERSON_X   LOC_A    01/01/2012 10:10  04/01/2012 02:00
    PERSON_X   LOC_B    04/01/2012 02:00  06/01/2012 03:10
    PERSON_X   LOC_A    06/01/2012 03:10

    Hello

    DanU says:
    Thanks Frank! This was extremely helpful. I can probably get to the final stages. The only piece I am missing is having the end defined by the following recorded date. So I might try your query with the LEAD function.

    Sorry, I missed the meaning of end_date.
    You're right; all you need is the LEAD analytic function to get it:

    WITH     got_grp          AS
    (
         SELECT     recorded_date, cqm_category, pat_acct_num
         ,     ROW_NUMBER () OVER ( PARTITION BY  pat_acct_num
                                   ORDER BY          recorded_date
                           )
               -     ROW_NUMBER () OVER ( PARTITION BY  pat_acct_num
                                         ,                    cqm_category
                                   ORDER BY          recorded_date
                           )         AS grp
         FROM    export_table
    )
    SELECT       MIN (recorded_date)               AS start_date
    ,       LEAD (MIN (recorded_date)) OVER ( PARTITION BY  pat_acct_num
                                                 ORDER BY         MIN (recorded_date)
                               )     AS stop_date
    ,       cqm_category
    ,       pat_acct_num
    FROM       got_grp
    GROUP BY  pat_acct_num, cqm_category, grp
    ORDER BY  pat_acct_num, start_date
    ;
    

    It's almost the same query as what I posted before.
    Apart from substituting your new table and column names, the only change I made was how stop_date is defined in the main query.

  • no SQL view merging when using analytic functions

    Hi, SQL tuning specialists:
    I have a question about inline view merging.

    I have a simple view with analytic functions inside. When querying it, it does not use the index.


    CREATE OR REPLACE VIEW ttt
    AS
    SELECT empno, deptno,
           ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY deptno DESC NULLS LAST) part_seq
    FROM emp aaa;

    -- This will do a full table scan on emp
    SELECT * FROM ttt
    WHERE empno = 7369;

    -- If I do not use the view but run the query directly, the index is used
    SELECT empno, deptno,
           ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY deptno DESC NULLS LAST) part_seq
    FROM emp aaa
    WHERE empno = 7369;


    The question is: how can I force the first query to use the index?

    Thank you

    MScallion wrote:
    What happens if you use the push_pred hint?

    Nothing will happen. And it would be a bug if it did.

    select * from ttt
    WHERE empno=7369
    

    and

    SELECT empno,deptno,
    row_number() OVER (PARTITION BY deptno ORDER BY deptno desc NULLS last) part_seq
    FROM emp aaa
    WHERE empno=7369
    

    are two logically different queries. Analytic functions are applied after the whole result set is built. So the first query selects all rows from the emp table, then assigns ROW_NUMBER() to the retrieved rows, and only then picks the row with empno = 7369 from them. The second query selects the empno = 7369 row from emp, and only then applies ROW_NUMBER() - so, since emp.empno is unique, the ROW_NUMBER returned by the second query will always equal 1:

    SQL> select * from ttt
      2  WHERE empno=7369
      3  /
    
         EMPNO     DEPTNO   PART_SEQ
    ---------- ---------- ----------
          7369         20          4
    
    SQL> SELECT empno,deptno,
      2  row_number() OVER (PARTITION BY deptno ORDER BY deptno desc NULLS last) part_seq
      3  FROM emp aaa
      4  WHERE empno=7369
      5  /
    
         EMPNO     DEPTNO   PART_SEQ
    ---------- ---------- ----------
          7369         20          1
    
    SQL> 
    

    SY.
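    The order-of-operations difference SY describes can be reproduced in a few lines of Python with hypothetical rows (not the real scott.emp contents); `row_number` here simply numbers rows in list order, since the exact ORDER BY does not matter for the point being made:

```python
# Hypothetical emp rows: (empno, deptno); 7369 is listed last on purpose.
emp = [(7566, 20), (7788, 20), (7902, 20), (7369, 20)]

def row_number(rows):
    """Assign ROW_NUMBER() in list order."""
    return [(empno, deptno, rn) for rn, (empno, deptno) in enumerate(rows, start=1)]

# View semantics: number ALL rows first, then filter -> the surviving
# row keeps its position within the full partition.
ranked_then_filtered = [r for r in row_number(emp) if r[0] == 7369]

# Pushed-predicate semantics: filter first, then number -> the
# surviving row is always number 1, a logically different answer.
filtered_then_ranked = row_number([r for r in emp if r[0] == 7369])

print(ranked_then_filtered)   # [(7369, 20, 4)]
print(filtered_then_ranked)   # [(7369, 20, 1)]
```

    This mirrors the part_seq = 4 versus part_seq = 1 results shown in the SQL*Plus session above.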

  • Need help to resolve the query by using analytic functions

    Hello

    I need help solving this problem; I tried an analytic function but could not solve it.

    I have three tables, as illustrated below; they are populated from a flat file. The records are ordered sequentially based on the file name.

    The first record of each set, based on EIN, goes to TAB_RCE;
    the following records go to TAB_RCW;
    and the last record of each set, based on EIN, goes to TAB_RCT.

    How can I form the groups and assign

    EIN 12345 to line numbers 02, 03, 04 in table TAB_RCW and 05 in table TAB_RCT
    EIN 67890 to line numbers 07, 08, 09, 10 in table TAB_RCW and 11 in table TAB_RCT
    and so on...

    Thank you

    Rajesh

    -----
    TAB_RCE
    -----
    TYPE LINENUMBER EIN   FILENAME
    -----
    RCE  01         12345 ABC.TXT
    RCE  06         67890 ABC.TXT
    RCE  12         76777 ABC.TXT
    -----
    TAB_RCW
    -----
    TYPE LINENUMBER SSN   FILENAME
    -----
    RCW  02         22222 ABC.TXT
    RCW  03         33333 ABC.TXT
    RCW  04         44444 ABC.TXT
    RCW  07         55555 ABC.TXT
    RCW  08         66666 ABC.TXT
    RCW  09         77777 ABC.TXT
    RCW  10         88888 ABC.TXT
    RCW  13         99998 ABC.TXT
    RCW  14         99999 ABC.TXT
    -----
    TAB_RCT
    -----
    TYPE LINENUMBER FILENAME
    -----
    RCT  05         ABC.TXT
    RCT  11         ABC.TXT
    RCT  15         ABC.TXT
    -----
    SQL> with TAB_RCE as (
      2                   select 'RCE' rtype,'01' linenumber, '12345' EIN,'ABC.TXT' FILENAME from dual union all
      3                   select 'RCE','06','67890','ABC.TXT' from dual union all
      4                   select 'RCE','12','76777','ABC.TXT' from dual
      5                  ),
      6       TAB_RCW as (
      7                   select 'RCW' rtype,'02' linenumber,'22222' ssn,'ABC.TXT' FILENAME from dual union all
      8                   select 'RCW','03','33333','ABC.TXT' from dual union all
      9                   select 'RCW','04','44444','ABC.TXT' from dual union all
     10                   select 'RCW','07','55555','ABC.TXT' from dual union all
     11                   select 'RCW','08','66666','ABC.TXT' from dual union all
     12                   select 'RCW','09','77777','ABC.TXT' from dual union all
     13                   select 'RCW','10','88888','ABC.TXT' from dual union all
     14                   select 'RCW','13','99998','ABC.TXT' from dual union all
     15                   select 'RCW','14','99999','ABC.TXT' from dual
     16                  ),
     17       TAB_RCT as (
     18                   select 'RCT' rtype,'05' linenumber,'ABC.TXT' FILENAME from dual union all
     19                   select 'RCT','11','ABC.TXT' from dual union all
     20                   select 'RCT','15','ABC.TXT' from dual
     21                  )
     22  select  rtype,
     23          last_value(ein ignore nulls) over(partition by filename order by linenumber) ein,
     24          linenumber,
     25          ssn
     26    from  (
     27            select  rtype,
     28                    linenumber,
     29                    ein,
     30                    to_char(null) ssn,
     31                    filename
     32              from  TAB_RCE
     33            union all
     34            select  rtype,
     35                    linenumber,
     36                    to_char(null) ein,
     37                    ssn,
     38                    filename
     39              from  TAB_RCW
     40            union all
     41            select  rtype,
     42                    linenumber,
     43                    to_char(null) ein,
     44                    to_char(null) ssn,
     45                    filename
     46              from  TAB_RCt
     47          )
     48    order by linenumber
     49  /
    
    RTY EIN   LI SSN
    --- ----- -- -----
    RCE 12345 01
    RCW 12345 02 22222
    RCW 12345 03 33333
    RCW 12345 04 44444
    RCT 12345 05
    RCE 67890 06
    RCW 67890 07 55555
    RCW 67890 08 66666
    RCW 67890 09 77777
    RCW 67890 10 88888
    RCT 67890 11
    
    RTY EIN   LI SSN
    --- ----- -- -----
    RCE 76777 12
    RCW 76777 13 99998
    RCW 76777 14 99999
    RCT 76777 15
    
    15 rows selected.
    
    SQL> 
    

    SY.

  • Cannot use fax function after renaming the printer

    I have an HP LaserJet M1536DNF MFP connected over the network.  I ran the full setup package from the product CD, and also the complete driver package update from the web.  The problem is that the default printer name after installation is something like "HP LaserJet 1530 MFP Series PCL 6" and cannot be changed during installation.  When I tried to change the printer name in the Windows 'Printers and Faxes' panel, it succeeded, but the fax no longer works after that.  It says something like 'unable to find or connect to the printer' after I click fax or print a document to the fax printer.  However, I can still use the print function after the name change.  The only fix so far is to uninstall everything and reinstall, even though I tried renaming it back to exactly the original printer name.  I use Windows XP Pro SP3 with .NET Framework 3.5 fully up to date.  Thanks for any tips.

    Hi Cheetah12,

    Sorry for the misunderstanding.  I meant there is no option to rename the printer when you first run the complete package installation wizard, only in the fax configuration wizard.

    In fact, I must mention again that I can print and scan even after I renamed the printer.  The things I can't do are HP Send Fax and the HP Fax Setup Assistant, which means my fax function was not working properly.

    So I followed the HP document you posted step by step, and of course the diagnostics from the downloaded HP Print and Scan Doctor show both printing and scanning in good health, all green lights!  But it still could not solve the fax problem.

    However, I finally found my own way to solve the problem, which is simply to rename the printer back to its original name in the registry.  As a result, the printer in "Printers and Faxes" still shows the renamed name, but actually runs under the original printer name.

    In any case, thank you for your kind help and fast response.  But I suggest there should be a direct way, such as an option to rename the printer during the first installation wizard, in your next updated version.  This would certainly help users who have to install 2 or more printers of the same model tell easily which printer is which.

  • Cannot use "declare function" in an XMLQuery statement

    Hi all

    I'm using Oracle 11g. In a SQL*Plus terminal, if I enter:
    -----
    SELECT XMLQuery('
      1
    ' PASSING XMLType('<dummy/>') RETURNING CONTENT) FROM dual;
    -----
    I get, as expected, '1' as the response. Now, if instead I enter the following:
    -----
    SELECT XMLQuery('
      declare function local:one($arg) { 1 };
      local:one(.)
    ' PASSING XMLType('<dummy/>') RETURNING CONTENT) FROM dual;
    -----
    I get instead:
    ERROR: ORA-01756: quoted string not properly terminated

    Entering the statement line by line, I see that this error is raised on reaching the "};". I know this XQuery statement is correct (I have validated it in OxygenXML). What am I doing wrong?

    Thanks for your help!

    Add

    (: :)
    

    after the };
    The semicolon is the SQL statement terminator.

  • How to prioritize the query result using analytic functions

    Hello

    Published by: prakash on May 20, 2013 01:42

    Use ROW_NUMBER

    SQL> select PRVDR_LCTN_X_SPCLTY_SID,PRVDR_LCTN_IID,PRVDR_TYPE_X_SPCLTY_SID,STATUS_CID
      2  from
      3  (
      4    select t.*,
      5      row_number() over(partition by PRVDR_TYPE_X_SPCLTY_SID
      6                        order by STATUS_CID) rn
      7    from your_table t
      8  )
      9  where rn = 1;
    
    PRVDR_LCTN_X_SPCLTY_SID PRVDR_LCTN_IID PRVDR_TYPE_X_SPCLTY_SID STATUS_CID
    ----------------------- -------------- ----------------------- ----------
                   75292110       10153920                75004770          1
                   75291888       10153920                75004884          2
                   75292112       10153920                75004916          1
                   75292117       10153920                75004974          1
    
  • Choosing the right BlackBerry 10 platform for functions such as call history, data usage, and message logs

    I want to design a BlackBerry 10 application that can fetch all call logs, internet data usage, and message records. Please help me choose the right platform that provides all the APIs required for this application. BlackBerry 10 WebWorks and the Android runtime don't support all the APIs.

    Thank you.

    You can look here:

    https://developer.BlackBerry.com/native/documentation/Cascades/device_platform/invocation/invoking_c...

  • Cannot use my Vista PC to play older games (Fly! Terminal Velocity, a 1999 version)

    Hello, I have not been able to play a PC game called Fly! Terminal Velocity, a 1999 edition. I have tried many things suggested by other sites: clicking the game icon's Properties, going to the Compatibility tab and switching to compatibility mode for older versions of Windows, and also changing the settings and/or privilege levels, but it still doesn't work, and this is what it says: MICROSOFT WINDOWS: FLY! HAS STOPPED WORKING. A PROBLEM CAUSED THE PROGRAM TO STOP WORKING. WINDOWS WILL CLOSE THE PROGRAM AND NOTIFY YOU IF A SOLUTION IS AVAILABLE.

    I don't see any notification anywhere. The thing is that I cannot play this game; can someone guide me on what else to do?

    Some older games will not play on Vista no matter what you do. Some older games will not play on Windows 7 either. Some games that play on Vista won't play on Windows 7 etc etc.

  • How to store XML fragments using functions such as XMLElement

    Hello

    Not sure what I'm missing. I want to store XML fragments in variables so that I can pass them around and concatenate them with other fragments to make the final XML document. Even before trying to concatenate fragments, I'm struggling to store any XML fragment in a variable. I'm trying to use simple functions such as XMLElement to generate XML data that can be stored in a variable. I've seen many examples of XMLElement in SQL SELECT statements. Is XMLElement usable in PL/SQL? XMLElement is documented as returning a value of type XMLType. Functions such as XMLElement make it easy to generate XML without creating all the tags manually.

    Here is a simple example that illustrates what I would like to do. I wish I could get the XML fragment as a CLOB or XMLType. I get an error saying PLS-00201: identifier 'XMLELEMENT' must be declared.

    declare
      vTheData     XMLType;
      vTheDataClob CLOB;
    begin
      vTheData     := XMLElement("empno", '1234567');
      vTheDataClob := XMLElement("empno", '1234567').getclobval();
    end;

    I can use the approach below instead, but I would rather not have to use the SELECT ... FROM dual method. I wish I could use the XMLElement function in PL/SQL just as in SQL, like most other functions, e.g. LENGTH, RTRIM etc.

    declare
      vTheData     XMLType;
      vTheDataClob CLOB;
    begin
      select XMLElement("empno", '1234567')
        into vTheData
        from dual;

      select XMLElement("empno", '1234567').getclobval()
        into vTheDataClob
        from dual;
    end;

    Hope it makes sense.

    Thank you

    That said, is there a more elegant way to achieve the below? It fills two XML fragments and concatenates them together.

    Of course, why not just a single statement?

    select XMLConcat(
             XMLElement(...),
             XMLElement(...)
           )
      into vTheResult
      from dual;

    As a second question, is it better to build and pass XML fragments as CLOB or as XMLType?

    I would say stay with XMLType, but it depends on your requirements.

    I generally avoid passing data around element by element; SQL/XML functions are powerful in that they can be used with set operations, and using them in a piecemeal approach somewhat defeats what they are for.

  • Using the analytic function

    Oracle 11g Release 2

    I'm assuming that the best solution is the use of analytical functions.

    create table test3
    ( part_type_id  varchar2(50)
    ,group_id      number
    ,part_desc_id  number
    ,part_cmt      varchar2(50)
    )
    /
    
    insert into test3 values( 'ABC123',1,10,'comment1');
    insert into test3 values( 'ABC123',1,10,'comment2');
    insert into test3 values( 'ABC123',2,15,'comment1');
    insert into test3 values( 'ABC123',2,15,'comment2');
    insert into test3 values( 'EFG123',25,75,'comment3');
    insert into test3 values( 'EFG123',25,75,'comment4');
    insert into test3 values( 'EFG123',25,75,'comment5');
    insert into test3 values( 'XYZ123',1,10,'comment6');
    insert into test3 values( 'XYZ123',2,15,'comment7');
    commit;
    
    select * from test3;
    
    PART_TYPE_ID           GROUP_ID PART_DESC_ID PART_CMT
    -------------------- ---------- ------------ --------------------
    ABC123                        1           10 comment1
    ABC123                        1           10 comment2
    ABC123                        2           15 comment1
    ABC123                        2           15 comment2
    EFG123                        25          75 comment3
    EFG123                        25          75 comment4
    EFG123                        25          75 comment5
    XYZ123                        1           10 comment6
    XYZ123                        2           15 comment7
    
    9 rows selected.
    
    Desired output:
    
    PART_TYPE_ID           GROUP_ID PART_DESC_ID PART_CMT
    -------------------- ---------- ------------ --------------------
    ABC123                        1           10 comment1 
    ABC123                        2           15 comment1
    XYZ123                        1           10 comment1
    XYZ123                        2           15 comment2
    
    RULE: where one part_type_id has multiple (2 or more distinct combinations) of group_id/part_desc_id
    
    NOTE: There are about 12 columns in the table, for brevity I only included 4.
    
    
    
    

    Post edited by: orclrunner was updated desired output and rule

    Hello

    Here's one way:

    WITH got_d_count AS
    (
        SELECT part_type_id, group_id, part_desc_id,
               MIN(part_cmt) AS min_part_cmt,
               COUNT(*) OVER (PARTITION BY part_type_id) AS d_count
        FROM test3
        GROUP BY part_type_id, group_id, part_desc_id
    )
    SELECT DISTINCT
           part_type_id, group_id, part_desc_id, min_part_cmt
    FROM got_d_count
    WHERE d_count > 1
    ;

    Output:

    PART_TYPE_ID   GROUP_ID PART_DESC_ID MIN_PART_CMT
    ------------ ---------- ------------ ------------
    ABC123                1           10 comment1
    ABC123                2           15 comment1
    XYZ123                1           10 comment6
    XYZ123                2           15 comment7

    Analytic functions, such as COUNT and MIN, also have aggregate versions, which in many cases can give the same results.  Use the analytic versions when each row of output corresponds to exactly 1 row of input, and the aggregate and GROUP BY versions when each row of output corresponds to a group of 1 or more input rows.  In this problem, each output row appears to represent a group of input rows having the same group_id, part_type_id, and part_desc_id (I'm just guessing; this was never stated), so I used GROUP BY to get 1 output row for each such group of input rows.
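    The GROUP BY plus analytic COUNT combination can be mimicked in Python to see the three steps separately (illustration only; the data restates the test3 rows):

```python
from collections import Counter

# (part_type_id, group_id, part_desc_id, part_cmt) rows from test3
rows = [
    ("ABC123", 1, 10, "comment1"), ("ABC123", 1, 10, "comment2"),
    ("ABC123", 2, 15, "comment1"), ("ABC123", 2, 15, "comment2"),
    ("EFG123", 25, 75, "comment3"), ("EFG123", 25, 75, "comment4"),
    ("EFG123", 25, 75, "comment5"),
    ("XYZ123", 1, 10, "comment6"), ("XYZ123", 2, 15, "comment7"),
]

# Step 1 (GROUP BY): one entry per (part_type_id, group_id, part_desc_id),
# with MIN(part_cmt) as the aggregate.
groups = {}
for pt, gid, pdid, cmt in rows:
    key = (pt, gid, pdid)
    groups[key] = min(groups.get(key, cmt), cmt)

# Step 2 (analytic COUNT over the grouped rows): how many grouped rows
# share each part_type_id.
d_count = Counter(pt for pt, _, _ in groups)

# Step 3 (filter): keep part types with 2 or more distinct combinations.
result = sorted((pt, gid, pdid, cmt) for (pt, gid, pdid), cmt in groups.items()
                if d_count[pt] > 1)
for r in result:
    print(r)
```

    EFG123 has only one distinct group_id/part_desc_id combination, so it is filtered out, just as in the SQL output above.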

  • using an analytic function to get the right output

    Hello all;

    I have the following sample data below:
    create table temp_one
    (
           id number(30),   
          placeid varchar2(400),
          issuedate  date,
          person varchar2(400),
          failures number(30),
          primary key(id)
    );
    
    insert into temp_one values (1, 'NY', to_date('03/04/2011', 'MM/DD/YYYY'), 'John', 3);
    
    insert into temp_one values (2, 'NY', to_date('03/03/2011', 'MM/DD/YYYY'), 'Adam', 7);
    
    insert into temp_one values (3, 'Mexico', to_date('03/04/2011', 'MM/DD/YYYY'), 'Wendy', 3);
    
    insert into temp_one values (4, 'Mexico', to_date('03/14/2011', 'MM/DD/YYYY'), 'Gerry', 3);
    
    insert into temp_one values (5, 'Mexico', to_date('03/15/2011', 'MM/DD/YYYY'), 'Zick', 9);
    
    insert into temp_one values (6, 'London', to_date('03/16/2011', 'MM/DD/YYYY'), 'Mike', 8);
    This is the output I want:
    placeid       issueperiod                               failures
    NY              02/28/2011 - 03/06/2011          10
    Mexico       02/28/2011 - 03/06/2011           3
    Mexico        03/14/2011 - 03/20/2011          12
    London        03/14/2011 - 03/20/2011          8
    Any help is appreciated. I'll post my query as soon as I can think of good logic for this...

    Hello

    user13328581 wrote:
    ... Please note, I'm still learning how to use analytic functions.

    That doesn't matter; analytic functions won't help in this problem. The SUM aggregate function is all you need.
    But what do you GROUP BY? What will each row of the result represent? A placeid? Yes, each row will represent one placeid, but subdivided further. You want a separate output row for each placeid and each week, so you'll GROUP BY placeid and the week. You don't want to GROUP BY the raw issuedate; that would put March 3 and March 4 into separate groups. And you don't want to GROUP BY failures; that would mean a row with 3 failures could never be in the same group as a row with 9 failures.

    This gets the output you posted from the sample data you posted:

    SELECT       placeid
    ,             TO_CHAR ( TRUNC (issuedate, 'IW')
                  , 'MM/DD/YYYY'
                ) || ' - '|| TO_CHAR ( TRUNC (issuedate, 'IW') + 6
                                                 , 'MM/DD/YYYY'
                               )     AS issueperiod
    ,       SUM (failures)                  AS sumfailures
    FROM        temp_one
    GROUP BY  placeid
    ,            TRUNC (issuedate, 'IW')
    ;
    

    You could use a subquery to calculate TRUNC (issuedate, 'IW') just once. The code would be about as complicated, efficiency probably would not improve substantially, and the results would be the same.
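    The TRUNC(issuedate, 'IW') grouping (bucket each date by the Monday of its ISO week, then sum failures per placeid and week) can be sketched in Python with the sample data; an illustration of the grouping logic, not Oracle code:

```python
from collections import defaultdict
from datetime import date, timedelta

# (placeid, issuedate, failures) rows from temp_one
rows = [
    ("NY", date(2011, 3, 4), 3), ("NY", date(2011, 3, 3), 7),
    ("Mexico", date(2011, 3, 4), 3), ("Mexico", date(2011, 3, 14), 3),
    ("Mexico", date(2011, 3, 15), 9), ("London", date(2011, 3, 16), 8),
]

def week_start(d):
    """Monday of d's ISO week, the Python analogue of TRUNC(d, 'IW')."""
    return d - timedelta(days=d.weekday())

# GROUP BY placeid and week: sum failures per bucket.
totals = defaultdict(int)
for placeid, issuedate, failures in rows:
    totals[(placeid, week_start(issuedate))] += failures

for (placeid, wk), sumfailures in sorted(totals.items()):
    print(placeid, f"{wk} - {wk + timedelta(days=6)}", sumfailures)
```

    March 3 and March 4, 2011 both fall in the week starting Monday, February 28, so NY's 7 + 3 failures land in one bucket, matching the desired output.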

  • Purpose of the ORDER BY clause in the analytic function Min Max

    I have always used analytic functions like MIN and MAX without an ORDER BY clause. Today I used them with an ORDER BY clause, and the results are very different. I would like to know the purpose of the ORDER BY clause in MIN, MAX and similar analytic functions.

    user10566312 wrote:
    I have always used analytic functions like MIN and MAX without an ORDER BY clause. Today I used them with an ORDER BY clause, and the results are very different. I would like to know the purpose of the ORDER BY clause in MIN, MAX and similar analytic functions.

    It is a good point that many developers are not aware of. As far as I understand it, this is the way it works.

    Some analytic functions do not need an ORDER BY or windowing clause (SUM, COUNT, MIN, etc.). If no window is specified, then the full partition is the window.
    As soon as you add an ORDER BY, you also add a windowing clause. This window has the default value of 'RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW'. So as soon as you add an ORDER BY clause, you get a running window.

    Documentation: http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions001.htm

    windowing_clause
    ...
    You cannot specify this clause unless you have specified the order_by_clause. Some window boundaries defined by the RANGE clause let you specify only one expression in the order_by_clause. Refer to 'Restrictions on the ORDER BY Clause'.

    Example:

    with testdata as (select 10 numval, level lv from dual connect by level < 10)
    select lv, numval, sum(numval) over () sum1, sum(numval) over (order by lv) sum2
    from testdata;
    
    LV NUMVAL SUM1 SUM2
    -- ------ ---- ----
     1     10   90   10
     2     10   90   20
     3     10   90   30
     4     10   90   40
     5     10   90   50
     6     10   90   60
     7     10   90   70
     8     10   90   80
     9     10   90   90 
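
    As an illustration of that default (my own sketch, not part of the original post), spelling the window out explicitly should give the same running sum as sum2 above:

    ```sql
    -- Writing out the window that ORDER BY implies by default:
    -- sum2 and sum2_explicit return identical running sums.
    with testdata as (select 10 numval, level lv from dual connect by level < 10)
    select lv,
           sum(numval) over (order by lv) sum2,
           sum(numval) over (order by lv
                             range between unbounded preceding
                                       and current row) sum2_explicit
    from testdata;
    ```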
    

    Edited by: Sven W. on 25 Sep 2012 16:57 - default behavior corrected. Thanks to Chris.

  • Help with an analytic function

    Here is an example of the table data:
    ID    NAME             Start                  
    1     SARA             01-JAN-2006     
    2     SARA             03-FEB-2006     
    3     LAMBDA             21-MAR-2006     
    4     SARA             13-APR-2006     
    5     LAMBDA             01-JAN-2007     
    6     LAMBDA             01-SEP-2007     
    I would like to get this:
    Name        Start               Stop
    SARA        01-JAN-2006    20-MAR-2006
    LAMBDA      21-MAR-2006     12-APR-2006
    SARA        13-APR-2006     31-DEC-2006
    LAMBDA      01-JAN-2007      <null>
    I tried using the PARTITION BY clause of the analytic functions, but partitioning by name combines all the SARA rows into one partition and all the LAMBDA rows into another, which is not what I am trying to get.
    Is there an analytic function or other means to combine date ranges only when the same person appears consecutively?
    Thank you.

    This can be easily achieved using the tabibitosan method:

    First, you need to identify the 'group' to which each row in the list belongs:

    with sample_data as (select 1 id, 'SARA' name, to_date('01/01/2006', 'dd/mm/yyyy') start_date from dual union all
                         select 2 id, 'SARA' name, to_date('03/02/2006', 'dd/mm/yyyy') start_date from dual union all
                         select 3 id, 'LAMBDA' name, to_date('21/03/2006', 'dd/mm/yyyy') start_date from dual union all
                         select 4 id, 'SARA' name, to_date('13/04/2006', 'dd/mm/yyyy') start_date from dual union all
                         select 5 id, 'LAMBDA' name, to_date('01/01/2007', 'dd/mm/yyyy') start_date from dual union all
                         select 6 id, 'LAMBDA' name, to_date('01/09/2007', 'dd/mm/yyyy') start_date from dual)
    select id,
           name,
           start_date,
           lead(start_date, 1, to_date('31/12/9999', 'dd/mm/yyyy')) over (order by start_date) next_start_date,
           row_number() over (order by start_date)
             - row_number() over (partition by name order by start_date) grp
    from   sample_data;
    
            ID NAME   START_DATE NEXT_START_DATE        GRP
    ---------- ------ ---------- --------------- ----------
             1 SARA   01/01/2006 03/02/2006               0
             2 SARA   03/02/2006 21/03/2006               0
             3 LAMBDA 21/03/2006 13/04/2006               2
             4 SARA   13/04/2006 01/01/2007               1
             5 LAMBDA 01/01/2007 01/09/2007               3
             6 LAMBDA 01/09/2007 31/12/9999               3
    

    You can see that the group number is generated by comparing the row number over all rows (in order) with the row number within each name's set of rows (in the same order): when there is a gap because another name appears in between, the difference between the two, and hence the group number, changes.

    Once you have identified the group number for each set of rows, it is easy to find the min/max values within that group:

    
    with sample_data as (select 1 id, 'SARA' name, to_date('01/01/2006', 'dd/mm/yyyy') start_date from dual union all
                         select 2 id, 'SARA' name, to_date('03/02/2006', 'dd/mm/yyyy') start_date from dual union all
                         select 3 id, 'LAMBDA' name, to_date('21/03/2006', 'dd/mm/yyyy') start_date from dual union all
                         select 4 id, 'SARA' name, to_date('13/04/2006', 'dd/mm/yyyy') start_date from dual union all
                         select 5 id, 'LAMBDA' name, to_date('01/01/2007', 'dd/mm/yyyy') start_date from dual union all
                         select 6 id, 'LAMBDA' name, to_date('01/09/2007', 'dd/mm/yyyy') start_date from dual),
         tabibitosan as (select id,
                                name,
                                start_date,
                                lead(start_date, 1, to_date('31/12/9999', 'dd/mm/yyyy')) over (order by start_date) next_start_date,
                                row_number() over (order by start_date)
                                  - row_number() over (partition by name order by start_date) grp
                         from   sample_data)
    select name,
           min(start_date) start_date,
           max(next_start_date) stop_date
    from   tabibitosan
    group by name, grp
    order by start_date;
    
    NAME   START_DATE STOP_DATE
    ------ ---------- ----------
    SARA   01/01/2006 21/03/2006
    LAMBDA 21/03/2006 13/04/2006
    SARA   13/04/2006 01/01/2007
    LAMBDA 01/01/2007 31/12/9999
    

    If you want the max date (31/12/9999) to appear as null, you will need to use a CASE or DECODE to change it - I'll leave that as an exercise for you! I'll also let you work out how to get the day before as the stop_date.
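
    One way to complete that exercise (a sketch of my own, not from the original answer, reusing the tabibitosan subquery from the query above) is to null out the 31/12/9999 sentinel and subtract one day:

    ```sql
    -- Sketch: replace the 31/12/9999 sentinel with NULL and
    -- step back one day to get an inclusive stop_date.
    select name,
           min(start_date) start_date,
           case when max(next_start_date) = to_date('31/12/9999', 'dd/mm/yyyy')
                then null
                else max(next_start_date) - 1
           end stop_date
    from   tabibitosan
    group by name, grp
    order by start_date;
    ```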
