Help with SQL analytic function

I have a table with 2 columns, pro_id and sub_ver_id, and need only 5 pro_id values for each sub_ver_id.

SQL> select * from test1 order by SUB_VER_ID;

PRO_ID SUB_VER_ID
---------- ----------
1 0
2 0
3 0
4 0
5 0
6 0
10 1
15 1
16 1
11 1
12 1

PRO_ID SUB_VER_ID
---------- ----------
13 1
14 1
11 2
12 3

.............................

I'm new to analytic functions. I came up with the query below, but I'm not able to work out how to limit SRLNO to only 5 rows for each SUB_VER_ID. Any advice would be much appreciated.

Select distinct sub_ver_id, pro_id, row_number () over (order by sub_ver_id) srlno
from test1
order by sub_ver_id

It can be done as below...

select *
from
(
select sub_ver_id,pro_id, row_number () over (partition by sub_ver_id order by null) srlno
from test1
) where srlno <=5 order by sub_ver_id
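
As a side note: "order by null" leaves it up to Oracle which 5 rows per sub_ver_id survive. If the choice should be deterministic, a possible variant (assuming the 5 smallest pro_id values are wanted):

select *
from
(
select sub_ver_id,pro_id, row_number () over (partition by sub_ver_id order by pro_id) srlno
from test1
) where srlno <=5 order by sub_ver_id, srlno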

Thank you...

Tags: Database

Similar Questions

  • Help with an analytic function

    Here is an example of the table data:
    ID    NAME      Start
    1     SARA      01-JAN-2006
    2     SARA      03-FEB-2006
    3     LAMBDA    21-MAR-2006
    4     SARA      13-APR-2006
    5     LAMBDA    01-JAN-2007
    6     LAMBDA    01-SEP-2007
    I would get this:
    Name        Start               Stop
    SARA        01-JAN-2006    20-MAR-2006
    LAMBDA      21-MAR-2006     12-APR-2006
    SARA        13-APR-2006     31-DEC-2006
    LAMBDA      01-JAN-2007      <null>
    I tried using the PARTITION BY clause of an analytic function, but partitioning by name combines all of SARA's rows and all of LAMBDA's rows into one group/partition each, which is not what I am trying to get.
    Is there an analytic function or other means to combine date ranges only when the same person appears consecutively?
    Thank you.

    This can be easily achieved using tabibitosan:

    First of all, you need to identify the 'groups' that each name in the list belongs to:

    with sample_data as (select 1 id, 'SARA' name, to_date('01/01/2006', 'dd/mm/yyyy') start_date from dual union all
                         select 2 id, 'SARA' name, to_date('03/02/2006', 'dd/mm/yyyy') start_date from dual union all
                         select 3 id, 'LAMBDA' name, to_date('21/03/2006', 'dd/mm/yyyy') start_date from dual union all
                         select 4 id, 'SARA' name, to_date('13/04/2006', 'dd/mm/yyyy') start_date from dual union all
                         select 5 id, 'LAMBDA' name, to_date('01/01/2007', 'dd/mm/yyyy') start_date from dual union all
                         select 6 id, 'LAMBDA' name, to_date('01/09/2007', 'dd/mm/yyyy') start_date from dual)
    select id,
           name,
           start_date,
           lead(start_date, 1, to_date('31/12/9999', 'dd/mm/yyyy')) over (order by start_date) next_start_date,
           row_number() over (order by start_date)
             - row_number() over (partition by name order by start_date) grp
    from   sample_data;
    
            ID NAME   START_DATE NEXT_START_DATE        GRP
    ---------- ------ ---------- --------------- ----------
             1 SARA   01/01/2006 03/02/2006               0
             2 SARA   03/02/2006 21/03/2006               0
             3 LAMBDA 21/03/2006 13/04/2006               2
             4 SARA   13/04/2006 01/01/2007               1
             5 LAMBDA 01/01/2007 01/09/2007               3
             6 LAMBDA 01/09/2007 31/12/9999               3
    

    You can see the group number is generated by comparing the row number over all rows (in order) with the row number within each name's set of rows (in the same order): when there is a gap because another name appears in between, the group number changes.

    Once you have identified the group number for each set of rows, it is easy to find the min/max values in each group:

    
    with sample_data as (select 1 id, 'SARA' name, to_date('01/01/2006', 'dd/mm/yyyy') start_date from dual union all
                         select 2 id, 'SARA' name, to_date('03/02/2006', 'dd/mm/yyyy') start_date from dual union all
                         select 3 id, 'LAMBDA' name, to_date('21/03/2006', 'dd/mm/yyyy') start_date from dual union all
                         select 4 id, 'SARA' name, to_date('13/04/2006', 'dd/mm/yyyy') start_date from dual union all
                         select 5 id, 'LAMBDA' name, to_date('01/01/2007', 'dd/mm/yyyy') start_date from dual union all
                         select 6 id, 'LAMBDA' name, to_date('01/09/2007', 'dd/mm/yyyy') start_date from dual),
         tabibitosan as (select id,
                                name,
                                start_date,
                                lead(start_date, 1, to_date('31/12/9999', 'dd/mm/yyyy')) over (order by start_date) next_start_date,
                                row_number() over (order by start_date)
                                  - row_number() over (partition by name order by start_date) grp
                         from   sample_data)
    select name,
           min(start_date) start_date,
           max(next_start_date) stop_date
    from   tabibitosan
    group by name, grp
    order by start_date;
    
    NAME   START_DATE STOP_DATE
    ------ ---------- ----------
    SARA   01/01/2006 21/03/2006
    LAMBDA 21/03/2006 13/04/2006
    SARA   13/04/2006 01/01/2007
    LAMBDA 01/01/2007 31/12/9999
    

    If you want the max date to appear as null instead of 31/12/9999, you will need to use a CASE or DECODE to change it - I'll leave that as an exercise for you to do! I'll also let you find out how to get the day before as the stop_date.
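
    For reference, a minimal sketch of one way to do both exercises, reusing the tabibitosan subquery from above: NULLIF turns the 31/12/9999 sentinel into NULL (and NULL - 1 stays NULL), and subtracting one day from the real boundaries gives the day before the next start:

    select name,
           min(start_date) start_date,
           nullif(max(next_start_date),
                  to_date('31/12/9999', 'dd/mm/yyyy')) - 1 stop_date
    from   tabibitosan
    group by name, grp
    order by start_date;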

  • Help with analytic functions in ODI

    Hi all

    I'm using ODI 11g (11.1.1.3.0) and I'm building an interface that uses an analytic function in a column mapping, something like:
    SUM(salary) OVER (PARTITION BY ...)

    The problem is that when ODI sees the SUM it considers it an aggregate function and generates a GROUP BY. Is it possible to make ODI understand that it is not an aggregate function?


    I tried to create an option to specify whether the mapping is analytic, then updated the IKM accordingly, with no luck:
    <% if (odiRef.getUserExit("ANALYTIC").equals("1")) { %>
    <% } else { %>
    <%=odiRef.getGrpBy(i)%>
    <%=odiRef.getHaving(i)%>
    <% } %>

    Thanks in advance

    Seth,

    Try this thing posted by Uli:
    http://www.business-intelligence-quotient.com/?p=905

  • More help with analytical functions

    I had great help here yesterday and I need it once more today. I guess I'm still not able to get a solid understanding of analytic functions. So here's the problem:
    a table with 3 columns:
    product_id (int), sale_date (date), count_sold (int) - each row shows the number of items sold for the product on a given date.
    The query should return the 3 columns of the table AND a fourth column that contains the date with the best sales for the product. If there are two or more dates with equal sales, the latest one is chosen.

    Is this possible using an analytical function appropriately and without using a subquery?

    example:
    product_id, sale_date, count_sold, high_sales_date
    1, 01/01/2008, 10, 10/05/2008
    1, 10/03/2008, 20, 10/05/2008
    1, 10/04/2008, 25, 10/05/2008
    1, 10/05/2008, 25, 10/05/2008
    1, 01/06/2008, 22, 10/05/2008
    2, 05/12/2008, 12, 05/12/2008
    2, 06/01/2009, 10, 05/12/2008

    Thank you

    Hello

    Try this:

    SELECT  product_id
    ,       sale_date
    ,       count_sold
    ,       FIRST_VALUE (sale_date) OVER ( PARTITION BY  product_id
                                           ORDER BY      count_sold DESC
                                           ,             sale_date  DESC
                                         )      AS high_sales_date
    FROM    table_x;
    

    If you would post INSERT statements for your data, then I could test it.

    Question to ponder: why use FIRST_VALUE with a descending ORDER BY, and not LAST_VALUE with the default ascending order?
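
    A hint, as a sketch: with the default ascending order, LAST_VALUE uses the default window (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW), which stops at the current row, so it would just return the current row's sale_date. It only works with an explicit windowing clause:

    SELECT  product_id
    ,       sale_date
    ,       count_sold
    ,       LAST_VALUE (sale_date) OVER ( PARTITION BY  product_id
                                          ORDER BY      count_sold, sale_date
                                          ROWS BETWEEN  UNBOUNDED PRECEDING
                                                AND     UNBOUNDED FOLLOWING
                                        )      AS high_sales_date
    FROM    table_x;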

  • Using SQL analytic functions

    Thanks in advance to anyone who might help

    I am writing a storage report, trying to calculate space usage over a period of time. For each db_nm and tblsp_nm I need to find the first and the last collections, and then find the difference in space_used.

    The structure of the table is like this
    drop table tstg
    /
    create table tstg (db_nm varchar(10),
                      tblsp_nm varchar(15),
                      space_used number,
                      collection_time date)
    /
    insert into tstg values ( 'EDW', 'SYSTEM',100,to_date('01/07/2011','DD/MM/YYYY'));
    insert into tstg values ( 'EDW', 'SYSTEM',120,to_date('05/07/2011','DD/MM/YYYY'));
    insert into tstg values ( 'EDW', 'SYSTEM',150,to_date('10/07/2011','DD/MM/YYYY'));
    insert into tstg values ( 'EDW', 'SYSAUX',10,to_date('01/07/2011','DD/MM/YYYY'));
    insert into tstg values ( 'EDW', 'SYSAUX',12,to_date('05/07/2011','DD/MM/YYYY'));
    insert into tstg values ( 'EDW', 'SYSAUX',15,to_date('10/07/2011','DD/MM/YYYY'));
    commit;
    Expected result is
    DB_NM      TBLSP_NM        SPACE_USED COLLECTIO       DIFF
    ---------- --------------- ---------- --------- ----------
    EDW        SYSAUX                  15 10-JUL-11          5
    EDW        SYSTEM                 150 10-JUL-11         50
    I use
    select db_nm,tblsp_nm,space_used,collection_time,
    last_value(space_used) OVER (partition by DB_NM,Tblsp_nm order by collection_time ASC) -
    first_value(space_used) OVER (partition by DB_NM,Tblsp_nm order by collection_time ASC) diff
    from
    tstg
    but it gives more lines than the result I want:
    DB_NM      TBLSP_NM        SPACE_USED COLLECTIO       DIFF
    ---------- --------------- ---------- --------- ----------
    EDW        SYSAUX                  10      01-JUL-11          0
    EDW        SYSAUX                  12      05-JUL-11          2
    EDW        SYSAUX                  15      10-JUL-11          5
    EDW        SYSTEM                 100      01-JUL-11          0
    EDW        SYSTEM                 120      05-JUL-11         20
    EDW        SYSTEM                 150      10-JUL-11         50


    Thank you
    Eduardo

    Hello

    Thanks for the sample data.

    Here's a solution using the FIRST/LAST functions:

    select db_nm
         , tblsp_nm
         , max(collection_time) as collection_time
         , max(space_used) keep (dense_rank last order by collection_time) as space_used
         , max(space_used) keep (dense_rank last order by collection_time)
           - max(space_used) keep (dense_rank first order by collection_time) as diff
    from tstg
    group by db_nm
           , tblsp_nm
    ;
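
    As a side note on why the original attempt returned one row per collection: analytic functions preserve all rows, and LAST_VALUE with the default window stops at the current row, which is why the diff column grew row by row. If the per-collection rows should be kept while showing the overall difference on each row, the same FIRST/LAST functions also exist in analytic form; a sketch:

    select db_nm, tblsp_nm, space_used, collection_time,
           max(space_used) keep (dense_rank last  order by collection_time)
               over (partition by db_nm, tblsp_nm)
         - max(space_used) keep (dense_rank first order by collection_time)
               over (partition by db_nm, tblsp_nm) as diff
    from tstg;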
    
  • Help with analytical functions

    Hi all

    I'm on Oracle 11g DB and have records in the table that look like this
    transaction_ref   line_type   description
    --------------------   --------------  ---------------
     10                   DETAIL      abc123          
     10                   DETAIL      abc978          
     10                   DETAIL      test              
     10                   DETAIL      test              
     10                   DETAIL      test              
     20                   DETAIL      abcy             
     20                   DETAIL      abc9782       
     20                   DETAIL      test12          
     20                   DETAIL      test32          
    Using an analytic function, I generate a row number per transaction_ref as follows:
    SELECT transaction_ref, line_type, description,
           row_number() over (partition by transaction_ref order by 1) rownumber
    FROM mytable ;
    
    
    transaction_ref   line_type   description   rownumber
    --------------------   --------------  ---------------   ----------------
     10                   DETAIL      abc123          1
     10                   DETAIL      abc978          2
     10                   DETAIL      test              3
     10                   DETAIL      test              4
     10                   DETAIL      test              5
     20                   DETAIL      abcy             1
     20                   DETAIL      abc9782       2
     20                   DETAIL      test12          3
     20                   DETAIL      test32          4
    However, for my needs, I need my rownumber as follows:

    keeping 1 for the first row, I want to increment the row number by 3 for each subsequent row:
     transaction_ref   line_type   description   rownumber
    --------------------   --------------  ---------------   ----------------
     10                   DETAIL      abc123          1
     10                   DETAIL      abc978          4
     10                   DETAIL      test              7
     10                   DETAIL      test              10
     10                   DETAIL      test              13
     20                   DETAIL      abcy             1
     20                   DETAIL      abc9782       4
     20                   DETAIL      test12          7
     20                   DETAIL      test32          10
    .... 
    Thank you
    Maëlle

    Published by: user565538 on June 4, 2011 17:32

    Published by: user565538 on June 4, 2011 17:34

    Published by: user565538 on June 4, 2011 17:35
    with mytable as (
                     select 10 transaction_ref,'DETAIL' line_type,'abc123' description from dual union all
                     select 10,'DETAIL','abc978' from dual union all
                     select 10,'DETAIL','test' from dual union all
                     select 10,'DETAIL','test' from dual union all
                     select 10,'DETAIL','test' from dual union all
                     select 20,'DETAIL','abcy' from dual union all
                     select 20,'DETAIL','abc9782' from dual union all
                     select 20,'DETAIL','test12' from dual union all
                     select 20,'DETAIL','test32' from dual
                    )
    SELECT  transaction_ref,
            line_type,
            description,
            (row_number() over (partition by transaction_ref order by 1) - 1) * 3 + 1 rownumber
    FROM mytable
    /
    
    TRANSACTION_REF LINE_T DESCRIP  ROWNUMBER
    --------------- ------ ------- ----------
                 10 DETAIL abc123           1
                 10 DETAIL abc978           4
                 10 DETAIL test             7
                 10 DETAIL test            10
                 10 DETAIL test            13
                 20 DETAIL abcy             1
                 20 DETAIL abc9782          4
                 20 DETAIL test12           7
                 20 DETAIL test32          10
    
    9 rows selected.
    
    SQL> 
    

    SY.

  • Help with analytical functions - Windowing

    Hello

    I'm using Oracle 11.2.0.4.0.

    I want to sum all amounts for each rolling window of 3 days starting from the oldest date.  I also want to label each window with the end date of its 3-day period.

    My requirement is slightly more complicated, but I use this example to illustrate what I'm trying to do:

    create table test (dt date, amt number, run_id number);

    insert into test values (to_date('22/04/2015','dd/mm/yyyy'), 1, 1);
    insert into test values (to_date('23/04/2015','dd/mm/yyyy'), 1, 1);
    insert into test values (to_date('24/04/2015','dd/mm/yyyy'), 1, 1);
    insert into test values (to_date('25/04/2015','dd/mm/yyyy'), 1, 1);
    insert into test values (to_date('27/04/2015','dd/mm/yyyy'), 5, 1);
    insert into test values (to_date('28/04/2015','dd/mm/yyyy'), 2, 1);
    insert into test values (to_date('29/04/2015','dd/mm/yyyy'), 1, 1);
    insert into test values (to_date('30/04/2015','dd/mm/yyyy'), 1, 1);
    insert into test values (to_date('01/05/2015','dd/mm/yyyy'), 1, 1);
    insert into test values (to_date('02/05/2015','dd/mm/yyyy'), 1, 1);
    insert into test values (to_date('03/05/2015','dd/mm/yyyy'), 1, 1);
    insert into test values (to_date('04/05/2015','dd/mm/yyyy'), 1, 1);

    The output should look like the example below.  The PERIOD column needs to show the end date of each 3-day window:

    DT         AMT SUM_PER_PERIOD PERIOD
    22/04/2015   1              1 24/04/2015
    23/04/2015   1              2 24/04/2015
    24/04/2015   1              3 24/04/2015
    25/04/2015   1              3 27/04/2015
    27/04/2015   5              6 27/04/2015
    28/04/2015   2              7 30/04/2015
    29/04/2015  20             27 30/04/2015
    30/04/2015  30             52 30/04/2015
    01/05/2015   5             55 03/05/2015
    02/05/2015   5             50 03/05/2015
    02/05/2015  10             50 03/05/2015
    03/05/2015   1             21 03/05/2015

    All I can manage is this:

    select dt
    ,      amt
    ,      sum(amt) over (partition by run_id order by dt range between 2 preceding and current row) sum_per_period
    from   test
    order by dt;

    Can anyone help?

    It's very kind of you to give the create and insert statements... but I corrected the data a bit.

    As posted, they do not match the output you show; starting from 29/04, you forgot to change the dates and amounts...

    insert into test values (to_date('22/04/2015','dd/mm/yyyy'),1,1);
    insert into test values (to_date('23/04/2015','dd/mm/yyyy'),1,1);
    insert into test values (to_date('24/04/2015','dd/mm/yyyy'),1,1);
    insert into test values (to_date('25/04/2015','dd/mm/yyyy'),1,1);
    insert into test values (to_date('27/04/2015','dd/mm/yyyy'),5,1);
    insert into test values (to_date('28/04/2015','dd/mm/yyyy'),2,1);
    insert into test values (to_date('29/04/2015','dd/mm/yyyy'),20,1);
    insert into test values (to_date('30/04/2015','dd/mm/yyyy'),30,1);
    insert into test values (to_date('01/05/2015','dd/mm/yyyy'),5,1);
    insert into test values (to_date('02/05/2015','dd/mm/yyyy'),5,1);
    insert into test values (to_date('02/05/2015','dd/mm/yyyy'),10,1);
    insert into test values (to_date('03/05/2015','dd/mm/yyyy'),1,1);
    

    Your periods will change if you insert a new first date... so I guess you want a specific start date (in this case 22/04/2015) and a specific end date.

    Creating periods from this first date and then grouping into these periods is easier with a fixed first date and a delta of 3 days.

    The first step is to match the periods to your data:

    with periods as (
      select date_start + (level-1) * period_days period_start, date_start + level * period_days period_end, period_days from (
        select to_date('21/04/2015', 'dd/mm/yyyy') date_start, to_date('04/05/2015', 'dd/mm/yyyy') date_end, 3 period_days   from dual)
      connect by date_start + level * period_days  < date_end)
    select *
    from test t, periods p
    where t.dt > p.period_start and t.dt <= p.period_end
    

    This gives your data with the period start and end dates:

    DT         AMT RUN_ID PERIOD_START PERIOD_END PERIOD_DAYS
    22/04/2015   1      1 21/04/2015   24/04/2015           3
    23/04/2015   1      1 21/04/2015   24/04/2015           3
    24/04/2015   1      1 21/04/2015   24/04/2015           3
    25/04/2015   1      1 24/04/2015   27/04/2015           3
    27/04/2015   5      1 24/04/2015   27/04/2015           3
    28/04/2015   2      1 27/04/2015   30/04/2015           3
    29/04/2015  20      1 27/04/2015   30/04/2015           3
    30/04/2015  30      1 27/04/2015   30/04/2015           3
    01/05/2015   5      1 30/04/2015   03/05/2015           3
    02/05/2015   5      1 30/04/2015   03/05/2015           3
    02/05/2015  10      1 30/04/2015   03/05/2015           3
    03/05/2015   1      1 30/04/2015   03/05/2015           3

    and then sum the amt over the 3-day windows:

    with periods as (
      select date_start + (level-1) * period_days period_start, date_start + level * period_days period_end, period_days from (
        select to_date('21/04/2015', 'dd/mm/yyyy') date_start, to_date('04/05/2015', 'dd/mm/yyyy') date_end, 3 period_days   from dual)
      connect by date_start + level * period_days  < date_end)
    select t.dt, t.amt, sum(amt) over (order by t.dt range between 2 preceding and current row) sum_per_period, p.period_end period
    from test t, periods p
    where t.dt > p.period_start and t.dt <= p.period_end
    

    giving your output as requested:

    DT         AMT SUM_PER_PERIOD PERIOD
    22/04/2015   1              1 24/04/2015
    23/04/2015   1              2 24/04/2015
    24/04/2015   1              3 24/04/2015
    25/04/2015   1              3 27/04/2015
    27/04/2015   5              6 27/04/2015
    28/04/2015   2              7 30/04/2015
    29/04/2015  20             27 30/04/2015
    30/04/2015  30             52 30/04/2015
    01/05/2015   5             55 03/05/2015
    02/05/2015   5             50 03/05/2015
    02/05/2015  10             50 03/05/2015
    03/05/2015   1             21 03/05/2015
  • Help with analytic functions: PRECEDING and FOLLOWING

    Hello

    Assume I have the data as follows:
    customerid    orderid                      orderdate    
    -------------    ----------                    --------------
    xyz                       1                       01/10/2010
    xyz                       2                       02/11/2010
    xyz                       3                       03/12/2011
    xyz                       4                       03/01/2011
    xyz                       5                       03/02/2011
    xyz                       6                       03/03/2011
    abc                       7                       10/09/2010
    abc                       8                       10/10/2010
    abc                       9                       10/11/2010
    abc                       10                     10/01/2011
    abc                       11                     10/02/2011
    abc                       12                     10/03/2011
    Now I want to generate a report based on the above data, as below:

    CustomerID; number of orders placed in the last 30 days before the new year (01/01/2011); number of orders placed in the last 60 days before the new year; number of orders placed in the last 90 days before the new year; number of orders placed within 30 days after the new year; number of orders placed within 60 days after the new year; number of orders placed within 90 days after the new year.


    I am trying to do this using the following code, but could not succeed:
        select c.customerid,
        count(*) over (partition by c.customerid order by c.orderdate RANGE interval '30' DAY PRECEDING) as "Last 1 month",
        count(*) over (partition by c.customerid order by c.orderdate RANGE interval '60' DAY PRECEDING) as "Last 2 months",
        count(*) over (partition by c.customerid order by c.orderdate RANGE interval '90' DAY PRECEDING) as "Last 3 months",
        count(*) over (partition by c.customerid order by c.orderdate RANGE interval '30' DAY FOLLOWING) as "Following 1 month",
        count(*) over (partition by c.customerid order by c.orderdate RANGE interval '60' DAY FOLLOWING) as "Following 2 months",
        count(*) over (partition by c.customerid order by c.orderdate RANGE interval '90' DAY FOLLOWING) as "Following 3 months"
        from customer_orders c where orderdate < to_date('01/01/2011','dd/mm/yyyy')
    Kindly help. Thanks in advance.

    Published by: 858747 on May 13, 2011 03:40

    Published by: BluShadow on May 13, 2011 11:57
    added {noformat}{noformat} tags to retain formatting.  Please read: {message:id=9360002}
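
    For what it's worth: since the six windows are fixed relative to 01/01/2011 rather than sliding per row, analytic windows are not really needed here. A sketch using conditional counts instead, with the table and column names from the question:

    select customerid,
           count(case when orderdate >= date '2011-01-01' - 30
                       and orderdate <  date '2011-01-01'      then 1 end) as last_30_days,
           count(case when orderdate >= date '2011-01-01' - 60
                       and orderdate <  date '2011-01-01'      then 1 end) as last_60_days,
           count(case when orderdate >= date '2011-01-01' - 90
                       and orderdate <  date '2011-01-01'      then 1 end) as last_90_days,
           count(case when orderdate >= date '2011-01-01'
                       and orderdate <  date '2011-01-01' + 30 then 1 end) as next_30_days,
           count(case when orderdate >= date '2011-01-01'
                       and orderdate <  date '2011-01-01' + 60 then 1 end) as next_60_days,
           count(case when orderdate >= date '2011-01-01'
                       and orderdate <  date '2011-01-01' + 90 then 1 end) as next_90_days
    from   customer_orders
    group by customerid;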

  • Need help with analytical function (LAG)

    The requirement: I have a table with the columns described below.

    col1  cnt  flag  flag2

    abc   1    Y     Y
    xyz   1    Y     Y
    xyz   1    Y     NULL
    xyz   2    N     N
    xyz   2    Y     NULL
    def   1    Y     Y
    def   1    N     NULL

    To derive the Flag2 column:

    1. assign flag2 = flag for the first row (rownum = 1)
    2. compare col1 and cnt of the current row with col1 and cnt of the previous row; if col1 and cnt are identical, assign NULL...


    Here's the query I used to get the values of Flag2


    SELECT col1, cnt, flag,
    CASE WHEN
    LAG(cnt, 1, null) OVER (PARTITION BY col1 ORDER BY col1 DESC NULLS LAST) IS NULL
    and LAG(flag, 1, NULL) OVER (PARTITION BY col1 ORDER BY col1, cnt DESC NULLS LAST) IS NULL
    THEN flag
    END AS Flag2
    FROM table1


    but the query above returns the output below, which is wrong:

    col1  cnt  flag  flag2

    abc   1    Y     Y
    xyz   1    Y     Y
    xyz   1    Y     NULL
    xyz   2    N     NULL
    xyz   2    Y     NULL
    def   1    Y     Y
    def   1    N     NULL


    Thank you

    Published by: user9370033 on April 8, 2010 23:25

    Well, you have not fully explained your requirement in this:

    1. assign flag2 = flag for the first row (rownum = 1)
    2. compare col1 and cnt of the current row with col1 and cnt of the previous row; if col1 and cnt are identical, assign NULL...

    as you do not say what Flag2 must be set to when col1 and cnt are not the same as in the previous row.

    But how about this as my first guess at what you mean...

    SQL> with t as (select 'abc' as col1, 1 as cnt, 'Y' as flag from dual union all
      2             select 'xyz', 1, 'Y' from dual union all
      3             select 'xyz', 1, 'Y' from dual union all
      4             select 'xyz', 2, 'N' from dual union all
      5             select 'xyz', 2, 'Y' from dual union all
      6             select 'def', 1, 'Y' from dual union all
      7             select 'def', 1, 'N' from dual)
      8  -- END OF TEST DATA
      9  select col1, cnt, flag
     10        ,case when lag(col1) over (order by col1, cnt) is null then flag
     11              when lag(col1) over (order by col1, cnt) = col1 and
     12                   lag(cnt) over (order by col1, cnt) = cnt then null
     13              else flag
     14         end as flag2
     15  from t
     16  /
    
    COL        CNT F F
    --- ---------- - -
    abc          1 Y Y
    def          1 Y Y
    def          1 N
    xyz          1 Y Y
    xyz          1 Y
    xyz          2 Y Y
    xyz          2 N
    
    7 rows selected.
    
    SQL>
    
  • Help with analytic functions

    I have a requirement to get previous values on each record, so I used the LEAD function. The final result should return only IDs that have multiple records... if an ID has only one record, it should not be part of the result.
    --Table Creation
    CREATE TABLE EHR(  ID      NUMBER,  NAMEID  VARCHAR2(100),  SAL     NUMBER,  DT      DATE);
    
    -- Inserting the data
    Insert into ehr  (ID, NAMEID, SAL, DT)
     Values  (1, 'A', 100, TO_DATE('10/08/2012 12:35:10', 'MM/DD/YYYY HH24:MI:SS'));
    Insert into ehr    (ID, NAMEID, SAL, DT)
     Values    (1, 'A', 200, TO_DATE('10/09/2012 12:35:16', 'MM/DD/YYYY HH24:MI:SS'));
    Insert into ehr    (ID, NAMEID, SAL, DT)
     Values    (1, 'A', 300, TO_DATE('10/10/2012 12:35:21', 'MM/DD/YYYY HH24:MI:SS'));
    Insert into ehr    (ID, NAMEID, SAL, DT)
     Values    (2, 'B', 100, TO_DATE('10/08/2012 12:35:10', 'MM/DD/YYYY HH24:MI:SS'));
    Insert into ehr    (ID, NAMEID, SAL, DT)
     Values    (2, 'B', 200, TO_DATE('10/09/2012 12:35:16', 'MM/DD/YYYY HH24:MI:SS'));
    Insert into ehr    (ID, NAMEID, SAL, DT)
     Values    (2, 'B', 300, TO_DATE('10/10/2012 12:35:21', 'MM/DD/YYYY HH24:MI:SS'));
    Insert into ehr    (ID, NAMEID, SAL, DT)
     Values    (3, 'C', 100, TO_DATE('10/17/2012 09:14:35', 'MM/DD/YYYY HH24:MI:SS'));
    COMMIT;
    
    -- Query
    
    select id, nameid, sal, dt,
           LEAD(sal,1,0) over (partition by id order by id) as sal_prev,
           LEAD(dt) over (partition by id order by id) as dt_prev
    from ehr
    order by id, dt desc
    
    -- Result
    ID     NAMEID     SAL     DT                      SAL_PREV     DT_PREV
    1     A     300     10/10/2012 12:35:21 PM     200     10/9/2012 12:35:16 PM
    1     A     200     10/9/2012 12:35:16 PM     100     10/8/2012 12:35:10 PM
    1     A     100     10/8/2012 12:35:10 PM     0     
    2     B     300     10/10/2012 12:35:21 PM     200     10/9/2012 12:35:16 PM
    2     B     200     10/9/2012 12:35:16 PM     100     10/8/2012 12:35:10 PM
    2     B     100     10/8/2012 12:35:10 PM     0     
    3     C     100     10/17/2012 9:14:35 AM     0     
    I don't want ID 3 in the result above because it contains only one record... the rest should come through. Any ideas?

    I think you'll want to drop the numbering column and change the select * to the columns that you need; if not, why not test it on the sample table you provided?

    Edited by: bencol on 17 October 2012 15:05

    Edited by: bencol on 17 October 2012 15:08
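
    A minimal sketch of one way to return only IDs with more than one record, assuming the EHR table above; the per-id COUNT in the inline view is an assumption, not code from the reply:

    select id, nameid, sal, dt, sal_prev, dt_prev
    from  (select id, nameid, sal, dt,
                  lead(sal, 1, 0) over (partition by id order by dt desc) sal_prev,
                  lead(dt)        over (partition by id order by dt desc) dt_prev,
                  count(*)        over (partition by id) cnt
           from   ehr)
    where cnt > 1
    order by id, dt desc;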

  • Help with analytic function

    version 9.2

    Here is a sample
    WITH temp AS
         (SELECT 10 ID, TRUNC (SYSDATE - 1) dt, 101 ord_id
            FROM DUAL
          UNION
          SELECT 11 ID, TRUNC (SYSDATE - 1) dt, 101 ord_id
            FROM DUAL
          UNION
          SELECT 11 ID, TRUNC (SYSDATE) dt, 103 ord_id
            FROM DUAL
          UNION
          SELECT 13 ID, TRUNC (SYSDATE) dt, 104 ord_id
            FROM DUAL)
    SELECT *
      FROM temp
    Expected output: number of distinct orders for each date
    Dt     Count
    1/25   1
    1/26   2
    ME_XE?WITH temp AS
      2       (SELECT 10 ID, TRUNC (SYSDATE - 1) dt, 101 ord_id
      3          FROM DUAL
      4        UNION
      5        SELECT 11 ID, TRUNC (SYSDATE - 1) dt, 101 ord_id
      6          FROM DUAL
      7        UNION
      8        SELECT 11 ID, TRUNC (SYSDATE) dt, 103 ord_id
      9          FROM DUAL
     10        UNION
     11        SELECT 13 ID, TRUNC (SYSDATE) dt, 104 ord_id
     12          FROM DUAL)
     13  SELECT dt, count(distinct ord_id)
     14  FROM temp
     15  group by dt;

    DT                         COUNT(DISTINCTORD_ID)
    -------------------------- ---------------------
    25-JAN-2009 12 00:00                           1
    26-JAN-2009 12 00:00                           2

    2 rows selected.

    Elapsed: 00:00:00.01
    ME_XE?
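
    Since the title asks about analytic functions: the same count can also be written analytically, at the cost of a DISTINCT in the outer select. A sketch against the same temp data:

    SELECT DISTINCT dt, COUNT(DISTINCT ord_id) OVER (PARTITION BY dt) ord_count
    FROM temp;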
    
  • Confusion with analytic functions

    I created an example of where I am right now using analytic functions. However, I need the query below to return an additional column: the result of factored_day_sales * max(sdus). Any ideas?

    If the first column is 777777, the row should have the following results:

    777777, 5791, 10, 1.5, 15, 90, 135, 7050

    The 1350 is the result I don't know how to get (somehow multiplying factored_day_sales by max(sdus): 15 * 470 = 7050).
    create table david_sales (
    pro_id number(38),
    salesidx number (38,6),
    tim_id number(38));
    
    truncate table david_sales
    
    create table david_compensations (
    pro_id number(38),
    tim_id number(38),
    factor number(38,6));
    
    
    insert into david_sales values
    (777777, 10.00, 5795);
    insert into david_sales values
    (777777,20.00, 5795);
    insert into david_sales values
    (777777, 30.00, 5794);
    insert into david_sales values
    (777777, 40.00, 5794);
    insert into david_sales values
    (777777, 100.00, 5793);
    insert into david_sales values
    (777777, 10.00, 5793);
    insert into david_sales values
    (777777,80.00, 5791);
    insert into david_sales values
    (777777, 10.00, 5791);
    
    insert into david_compensations values
    (777777, 5795, 1.5);
    insert into david_compensations values
    (777777, 5793, 2.0);
    insert into david_compensations values
    (777777, 5792, 1.0);
    insert into david_compensations values
    (777777, 5791, 1.5);
    
    
    
        SELECT  s.pro_id sales_pro
        ,       c.pro_id comp_pro
        ,       s.tim_id sales_tim
        ,       c.tim_id comp_tim
        ,       s.salesidx day_sales
        ,       NVL(c.factor, 1) factor
        ,       s.salesidx * NVL(c.factor, 1) factored_day_sales
        ,       sum(s.salesidx                   ) over (partition by s.pro_id order by s.pro_id, s.tim_id) Sdus
        ,       sum(s.salesidx * NVL(c.factor, 1)) over (partition by s.pro_id order by s.pro_id, s.tim_id) sumMjCj 
          FROM david_sales s
          ,    david_compensations c
          WHERE s.pro_id    = c.pro_id(+)
          AND s.tim_id      = c.tim_id(+)
          AND s.tim_id     BETWEEN 5791  AND 5795
    Thanks for looking

    Is that what you want?

        SELECT  s.pro_id sales_pro
        ,       c.pro_id comp_pro
        ,       s.tim_id sales_tim
        ,       c.tim_id comp_tim
        ,       s.salesidx day_sales
        ,       NVL(c.factor, 1) factor
        ,       s.salesidx * NVL(c.factor, 1) factored_day_sales
        ,       sum(s.salesidx                   ) over (partition by s.pro_id order by s.pro_id, s.tim_id) Sdus
        ,       sum(s.salesidx * NVL(c.factor, 1)) over (partition by s.pro_id order by s.pro_id, s.tim_id) sumMjCj
        , (s.salesidx * NVL(c.factor, 1) * sum(s.salesidx                   ) over (partition by s.pro_id order by s.pro_id, s.tim_id))
          FROM david_sales s
          ,    david_compensations c
          WHERE s.pro_id    = c.pro_id(+)
          AND s.tim_id      = c.tim_id(+)
          AND s.tim_id     BETWEEN 5791  AND 5795
    
    SALES_PRO              COMP_PRO               SALES_TIM              COMP_TIM               DAY_SALES              FACTOR                 FACTORED_DAY_SALES     SDUS                   SUMMJCJ                SUMMEDMULTI
    ---------------------- ---------------------- ---------------------- ---------------------- ---------------------- ---------------------- ---------------------- ---------------------- ---------------------- ----------------------
    777777                 777777                 5791                   5791                   80                     1.5                    120                    90                     135                    10800
    777777                 777777                 5791                   5791                   10                     1.5                    15                     90                     135                    1350  
    

    I get the 1350

    or did you mean:

        SELECT  s.pro_id sales_pro
        ,       c.pro_id comp_pro
        ,       s.tim_id sales_tim
        ,       c.tim_id comp_tim
        ,       s.salesidx day_sales
        ,       NVL(c.factor, 1) factor
        ,       s.salesidx * NVL(c.factor, 1) factored_day_sales
        ,       sum(s.salesidx                   ) over (partition by s.pro_id order by s.pro_id, s.tim_id) Sdus
        ,       sum(s.salesidx * NVL(c.factor, 1)) over (partition by s.pro_id order by s.pro_id, s.tim_id) sumMjCj
        ,  s.salesidx * NVL(c.factor, 1) * (sum(s.salesidx * NVL(c.factor, 1)) over (partition by s.pro_id order by s.pro_id, s.tim_id)) summedMulti
          FROM david_sales s
          ,    david_compensations c
          WHERE s.pro_id    = c.pro_id(+)
          AND s.tim_id      = c.tim_id(+)
          AND s.tim_id     BETWEEN 5791  AND 5795 
    
    SALES_PRO              COMP_PRO               SALES_TIM              COMP_TIM               DAY_SALES              FACTOR                 FACTORED_DAY_SALES     SDUS                   SUMMJCJ                SUMMEDMULTI
    777777                 777777                 5795                   5795                   10                     1.5                    15                     300                    470                    7050
    

    Note: in the second block, I changed it just to use sumMjCj instead of sdus, which seems to correlate with what you wanted (15 * 470 = 7050), while with sdus it would be 15 * 300 = 4500.

    Published by: tanging on December 11, 2009 06:17

  • SQL using the analytic function


    Hi all

    I need help creating a SQL query to retrieve the data described below.

    I have a sample table TEST containing data as below:

    ID  Desc  State

    1   T1    DISABLED
    2   T2    ACTIVE
    3   T3    SUCCESS
    4   T4    DISABLED

    The thing I want to do is to select all rows with an ACTIVE status from the table, but if there is no ACTIVE status, the query should give me the last row with DISABLED status.

    Can I do this in a single query, for example by using an analytic function? If yes, can you help me with writing the query?

    Kind regards

    Raluce

    Something like that?

    I had to fix it.

    with testdata as (
    select 1 id, 'T1' dsc, 'DISABLED' status from dual union all
    select 2 id, 'T2' dsc, 'ACTIVE' status from dual union all
    select 3 id, 'T3' dsc, 'SUCCESS' status from dual union all
    select 4 id, 'T4' dsc, 'DISABLED' status from dual
    )

    select
    id
    , dsc
    , status
    from testdata
    where
    status =
    case when (select count(*) from testdata where status = 'ACTIVE') > 0
    then 'ACTIVE'
    else 'DISABLED'
    end
    and (
    id in (select id from testdata where status = 'ACTIVE')
    or
    id = (select max(id) from testdata where status = 'DISABLED')
    )

    ID  DSC  STATUS

    2   T2   ACTIVE

    Maybe this is more efficient:

    select
    id
    , dsc
    , status
    from testdata
    where
    status =
    case when (select count(*) from testdata where status = 'ACTIVE') > 0
    then 'ACTIVE'
    else 'DISABLED'
    end
    and
    id =
    case when (select count(*) from testdata where status = 'ACTIVE') > 0
    then id
    else (select max(id) from testdata where status = 'DISABLED')
    end

    Post edited by: correction of chris227

    Post edited by: chris227
    extended
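
    Since the question explicitly asked for an analytic function, a sketch of an analytic-only variant (same testdata as above, a single pass over the table):

    select id, dsc, status
    from  (select t.*,
                  count(case when status = 'ACTIVE' then 1 end) over () active_cnt,
                  max(case when status = 'DISABLED' then id end) over () last_disabled_id
           from   testdata t)
    where (active_cnt > 0 and status = 'ACTIVE')
       or (active_cnt = 0 and id = last_disabled_id);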

  • Need help to resolve the query by using analytic functions

    Hello

    I need help solving this problem; I tried an analytic function but could not make it work.

    I have three tables, illustrated below, which are populated from a flat file. The records are numbered sequentially within each file.

    The first record of each set (grouped by EIN) goes to TAB_RCE,
    the following records go to TAB_RCW,
    and the last record of each set goes to TAB_RCT.

    How can I form the groups and assign
    EIN 12345 to line numbers 02, 03, 04 in TAB_RCW and 05 in TAB_RCT,
    EIN 67890 to line numbers 07, 08, 09, 10 in TAB_RCW and 11 in TAB_RCT,
    and so on...

    Thank you

    Rajesh

    TAB_RCE
    -----
    TYPE LINENUMBER EIN   FILENAME
    -----
    RCE  01         12345 ABC.TXT
    RCE  06         67890 ABC.TXT
    RCE  12         76777 ABC.TXT

    -----
    TAB_RCW
    -----
    TYPE LINENUMBER SSN   FILENAME
    -----
    RCW  02         22222 ABC.TXT
    RCW  03         33333 ABC.TXT
    RCW  04         44444 ABC.TXT
    RCW  07         55555 ABC.TXT
    RCW  08         66666 ABC.TXT
    RCW  09         77777 ABC.TXT
    RCW  10         88888 ABC.TXT
    RCW  13         99998 ABC.TXT
    RCW  14         99999 ABC.TXT

    -----
    TAB_RCT
    -----
    TYPE LINENUMBER FILENAME
    -----
    RCT  05         ABC.TXT
    RCT  11         ABC.TXT
    RCT  15         ABC.TXT
    -----
    SQL> with TAB_RCE as (
      2                   select 'RCE' rtype,'01' linenumber, '12345' EIN,'ABC.TXT' FILENAME from dual union all
      3                   select 'RCE','06','67890','ABC.TXT' from dual union all
      4                   select 'RCE','12','76777','ABC.TXT' from dual
      5                  ),
      6       TAB_RCW as (
      7                   select 'RCW' rtype,'02' linenumber,'22222' ssn,'ABC.TXT' FILENAME from dual union all
      8                   select 'RCW','03','33333','ABC.TXT' from dual union all
      9                   select 'RCW','04','44444','ABC.TXT' from dual union all
     10                   select 'RCW','07','55555','ABC.TXT' from dual union all
     11                   select 'RCW','08','66666','ABC.TXT' from dual union all
     12                   select 'RCW','09','77777','ABC.TXT' from dual union all
     13                   select 'RCW','10','88888','ABC.TXT' from dual union all
     14                   select 'RCW','13','99998','ABC.TXT' from dual union all
     15                   select 'RCW','14','99999','ABC.TXT' from dual
     16                  ),
     17       TAB_RCT as (
     18                   select 'RCT' rtype,'05' linenumber,'ABC.TXT' FILENAME from dual union all
     19                   select 'RCT','11','ABC.TXT' from dual union all
     20                   select 'RCT','15','ABC.TXT' from dual
     21                  )
     22  select  rtype,
     23          last_value(ein ignore nulls) over(partition by filename order by linenumber) ein,
     24          linenumber,
     25          ssn
     26    from  (
     27            select  rtype,
     28                    linenumber,
     29                    ein,
     30                    to_char(null) ssn,
     31                    filename
     32              from  TAB_RCE
     33            union all
     34            select  rtype,
     35                    linenumber,
     36                    to_char(null) ein,
     37                    ssn,
     38                    filename
     39              from  TAB_RCW
     40            union all
     41            select  rtype,
     42                    linenumber,
     43                    to_char(null) ein,
     44                    to_char(null) ssn,
     45                    filename
     46              from  TAB_RCt
     47          )
     48    order by linenumber
     49  /
    
    RTY EIN   LI SSN
    --- ----- -- -----
    RCE 12345 01
    RCW 12345 02 22222
    RCW 12345 03 33333
    RCW 12345 04 44444
    RCT 12345 05
    RCE 67890 06
    RCW 67890 07 55555
    RCW 67890 08 66666
    RCW 67890 09 77777
    RCW 67890 10 88888
    RCT 67890 11
    
    RTY EIN   LI SSN
    --- ----- -- -----
    RCE 76777 12
    RCW 76777 13 99998
    RCW 76777 14 99999
    RCT 76777 15
    
    15 rows selected.
    
    SQL> 
    

    SY.

  • Help with a SQL function

    Hi all

    I have a table which includes

    ID, first_name, last_name and date of birth (DOB)

    I want to retrieve all the records that have the same name and DOB

    for example.

    ID, first_name, last_name, DOB

    1  xyz  abc  01/01/2012
    2  efg  hij  15/05/2012
    3  xyz  abc  01/01/2012
    4  xyz  abc  01/01/2012

    so in the output only rows 1, 3 and 4 will appear.

    Can someone please advise on the appropriate SQL function?

    Thanks in advance

    Hello

    One way is to use the analytical COUNT function:

    WITH got_num_rows AS
    (
        SELECT id, first_name, last_name, dob
        ,      COUNT (*) OVER (PARTITION BY first_name
                              ,             last_name
                              ,             dob
                              ) AS num_rows
        FROM   table_x
    )
    SELECT id, first_name, last_name, dob
    FROM   got_num_rows
    WHERE  num_rows > 1
    ;

    The subquery is needed here because analytic functions are computed after the WHERE clause has been applied.  To use the results of an analytic function in a WHERE clause, you must calculate the function in a subquery; then you can use the results wherever you want (including in the WHERE clause) of the outer query.
