Truncate output of analytical function?

For example this query:

Select month, sum(tot_sales) monthly_sales,
       avg(sum(tot_sales)) over (order by month
            rows between 1 preceding and 1 following) rolling_avg
from orders
where year = 2001 and region_id = 6
group by month;

gives me an output which includes several decimal places for the rolling_avg column.

Is there a way to truncate this? I tried using ROUND outside the analytic function and, sure enough, it didn't work. I can't think of anything else.

You can use an outer select on the result of this query:

select trunc(rolling_avg) from
( rolling_avg query);
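
For reference, here is a sketch of that outer select written out in full, assuming the same orders table and columns as the query above; applying TRUNC (or ROUND) directly around the whole analytic expression should also work:

select month, monthly_sales, trunc(rolling_avg) rolling_avg
from (
  select month, sum(tot_sales) monthly_sales,
         avg(sum(tot_sales)) over (order by month
              rows between 1 preceding and 1 following) rolling_avg
  from orders
  where year = 2001 and region_id = 6
  group by month
);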

Tags: Database

Similar Questions

  • Using an analytic function to get the right output

    Hello all;

    I have the following sample data below:
    create table temp_one
    (
           id number(30),   
          placeid varchar2(400),
          issuedate  date,
          person varchar2(400),
          failures number(30),
          primary key(id)
    );
    
    insert into temp_one values (1, 'NY', to_date('03/04/2011', 'MM/DD/YYYY'), 'John', 3);
    
    insert into temp_one values (2, 'NY', to_date('03/03/2011', 'MM/DD/YYYY'), 'Adam', 7);
    
    insert into temp_one values (3, 'Mexico', to_date('03/04/2011', 'MM/DD/YYYY'), 'Wendy', 3);
    
    insert into temp_one values (4, 'Mexico', to_date('03/14/2011', 'MM/DD/YYYY'), 'Gerry', 3);
    
    insert into temp_one values (5, 'Mexico', to_date('03/15/2011', 'MM/DD/YYYY'), 'Zick', 9);
    
    insert into temp_one values (6, 'London', to_date('03/16/2011', 'MM/DD/YYYY'), 'Mike', 8);
    This is the output I want:
    placeid       issueperiod                               failures
    NY              02/28/2011 - 03/06/2011          10
    Mexico       02/28/2011 - 03/06/2011           3
    Mexico        03/14/2011 - 03/20/2011          12
    London        03/14/2011 - 03/20/2011          8
    Any help is appreciated. I'll post my query as soon as I can think of good logic for this...

    Hello

    user13328581 wrote:
    ... Please note, I'm still learning how to use analytical functions.

    It doesn't matter; analytic functions won't help with this problem. The aggregate SUM function is all you need.
    But what do you GROUP BY? What will each row of the result represent? A placeid? Sort of; each row will represent only one placeid, but it will be divided further. You want a separate row of output for each placeid and each week, so you'll want to GROUP BY the week and GROUP BY placeid. You don't want to GROUP BY the raw issuedate; that would put March 3 and March 4 in separate groups. And you don't want to GROUP BY failures; that would mean a row with 3 failures could never be in the same group as a row with 9 failures.

    This produces the output you posted from the sample data you posted:

    SELECT    placeid
    ,         TO_CHAR ( TRUNC (issuedate, 'IW')
                      , 'MM/DD/YYYY'
                      ) || ' - ' || TO_CHAR ( TRUNC (issuedate, 'IW') + 6
                                            , 'MM/DD/YYYY'
                                            )        AS issueperiod
    ,         SUM (failures)                         AS sumfailures
    FROM      temp_one
    GROUP BY  placeid
    ,         TRUNC (issuedate, 'IW')
    ;
    

    You could use a subquery to compute TRUNC (issuedate, 'IW') only once. The code would be about as complicated, the efficiency probably would not improve substantially, and the results would be the same.
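
    For illustration, a sketch of that subquery variant (same temp_one table as above; the week_start alias is just for this example):

    SELECT    placeid
    ,         TO_CHAR (week_start, 'MM/DD/YYYY') || ' - '
                  || TO_CHAR (week_start + 6, 'MM/DD/YYYY')   AS issueperiod
    ,         SUM (failures)                                  AS sumfailures
    FROM     (
               SELECT  placeid
               ,       failures
               ,       TRUNC (issuedate, 'IW')   AS week_start
               FROM    temp_one
             )
    GROUP BY  placeid
    ,         week_start
    ;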

  • Cannot use analytic functions such as LAG/LEAD in ODI 12c components except in the expression

    Hi, I am a beginner with ODI 12c.

    I'm trying to get the last two comments made on a product for a given product id and load them into a target.

    I have a source table something like:

    SR_NO   PRODUCT   COMMENTS    LAST_UPDATED_TS
    1       Car       good        2015/05/15 08:30:25
    1       Car       average     2015/05/15 10:30:25
    2       Jeep      super       2015/05/15 11:30:25
    1       Car       bad         2015/05/15 11:30:25
    2       Jeep      horrible    2015/05/15 09:30:25
    2       Jeep      excellent   2015/05/15 12:30:25


    I want a target table based on the last updated timestamp, as below (the last two comments):


    SR_NO   Comment1   Comment2
    1       bad        average
    2       super      excellent

    I used the logic below to get the records in SQL Developer, but in ODI 12c I'm not able to do this by mapping a source to the target table and applying analytic functions to the columns of the target table. Can someone help me solve this problem?

    SELECT * FROM (
      SELECT SR_NO,
             Comment1,
             LAG(Comment1, 1) OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS ASC) Comment2,
             ROW_NUMBER() OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS DESC) RN
      FROM Source_table
    ) M
    WHERE RN = 1
    ;

    Hmm, I'm afraid ODI puts the filter too early in the query, so that it generates:

    SELECT * FROM (
      SELECT SR_NO,
             Comment1,
             LAG(Comment1, 1) OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS ASC) Comment2,
             ROW_NUMBER() OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS DESC) RN
      FROM Source_table
      WHERE RN = 1
    ) M
    ;

    Instead of:

    SELECT * FROM (
      SELECT SR_NO,
             Comment1,
             LAG(Comment1, 1) OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS ASC) Comment2,
             ROW_NUMBER() OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS DESC) RN
      FROM Source_table
    ) M
    WHERE RN = 1
    ;

    Even by changing the 'Execute on Hint' of your expression component so that it runs on the source, the generated query will stay the same.

    I think the easiest solution for you is to put everything before the filter in a reusable mapping with an output signature. Then drag this reusable mapping into your mapping as the new source and check the "subselect enabled" box.

    Your final mapping should look like this:

    Hope this helps.

    Kind regards

    JeromeFr

  • Using the analytic function

    Oracle 11g Release 2

    I'm assuming that the best solution is the use of analytical functions.

    create table test3
    ( part_type_id  varchar2(50)
    ,group_id      number
    ,part_desc_id  number
    ,part_cmt      varchar2(50)
    )
    /
    
    insert into test3 values( 'ABC123',1,10,'comment1');
    insert into test3 values( 'ABC123',1,10,'comment2');
    insert into test3 values( 'ABC123',2,15,'comment1');
    insert into test3 values( 'ABC123',2,15,'comment2');
    insert into test3 values( 'EFG123',25,75,'comment3');
    insert into test3 values( 'EFG123',25,75,'comment4');
    insert into test3 values( 'EFG123',25,75,'comment5');
    insert into test3 values( 'XYZ123',1,10,'comment6');
    insert into test3 values( 'XYZ123',2,15,'comment7');
    commit;
    
    select * from test3;
    
    PART_TYPE_ID           GROUP_ID PART_DESC_ID PART_CMT
    -------------------- ---------- ------------ --------------------
    ABC123                        1           10 comment1
    ABC123                        1           10 comment2
    ABC123                        2           15 comment1
    ABC123                        2           15 comment2
    EFG123                       25           75 comment3
    EFG123                       25           75 comment4
    EFG123                       25           75 comment5
    XYZ123                        1           10 comment6
    XYZ123                        2           15 comment7
    
    9 rows selected.
    
    Desired output:
    
    PART_TYPE_ID           GROUP_ID PART_DESC_ID PART_CMT
    -------------------- ---------- ------------ --------------------
    ABC123                        1           10 comment1 
    ABC123                        2           15 comment1
    XYZ123                        1           10 comment1
    XYZ123                        2           15 comment2
    
    RULE: include a part_type_id only where it has multiple (2 or more) distinct combinations of group_id/part_desc_id
    
    NOTE: There are about 12 columns in the table, for brevity I only included 4.
    
    
    
    

    Post edited by: orclrunner - updated desired output and rule

    Hello

    Here's one way:

    WITH got_d_count AS
    (
        SELECT    part_type_id, group_id, part_desc_id
        ,         MIN (part_cmt)                               AS min_part_cmt
        ,         COUNT (*) OVER (PARTITION BY part_type_id)   AS d_count
        FROM      test3
        GROUP BY  part_type_id, group_id, part_desc_id
    )
    SELECT DISTINCT
              part_type_id, group_id, part_desc_id, min_part_cmt
    FROM      got_d_count
    WHERE     d_count > 1
    ;

    Output:

    PART_TYPE_ID   GROUP_ID PART_DESC_ID MIN_PART_CMT
    ------------ ---------- ------------ ------------
    ABC123                1           10 comment1
    ABC123                2           15 comment1
    XYZ123                1           10 comment6
    XYZ123                2           15 comment7

    Many analytic functions, such as COUNT and MIN, also have aggregate versions, and sometimes they can give the same results.  Use the analytic versions when each row of output corresponds to exactly 1 row of input, and the aggregate (GROUP BY) versions when each row of output corresponds to a group of 1 or more input rows.  In this problem, each row of output seems to represent a group of input rows having the same part_type_id, group_id, and part_desc_id (I'm just guessing; this was never stated), so I used GROUP BY to get 1 row of output for each such group of input rows.
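
    For illustration, a small sketch (using the test3 table above) of the aggregate COUNT versus the analytic COUNT: the aggregate collapses the rows of each group into one row, while the analytic version keeps every row and just adds a column.

    -- Aggregate version: one row per part_type_id
    SELECT    part_type_id
    ,         COUNT (*)   AS rows_per_type
    FROM      test3
    GROUP BY  part_type_id;

    -- Analytic version: one row per input row, with the count repeated on each row
    SELECT    part_type_id, group_id, part_desc_id
    ,         COUNT (*) OVER (PARTITION BY part_type_id)   AS rows_per_type
    FROM      test3;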

  • Nth salary using the analytic function

    I use the query below to calculate the second highest salary along with empno and deptno.

    Is it possible to get the same result with another query without using a join condition, using only analytic and windowing functions, to get the desired output?

    SELECT e.empno,
           e.deptno,
           tmp.sal AS second_higher_salary
    FROM   emp e,
           (SELECT empno,
                   deptno,
                   sal,
                   DENSE_RANK() OVER (PARTITION BY deptno ORDER BY sal) AS rnk
            FROM   emp
           ) tmp
    WHERE  tmp.deptno = e.deptno
    AND    tmp.rnk = 2

    EMPNO      DEPTNO     SAL
    ---------- ---------- ----------
    7934       10         2450
    7782       10         2450
    7839       10         2450
    7876       20         1100
    7369       20         1100
    7902       20         1100
    7788       20         1100
    7566       20         1100
    7900       30         1250
    7844       30         1250
    7654       30         1250
    7521       30         1250
    7499       30         1250
    7698       30         1250
    7900       30         1250
    7844       30         1250
    7654       30         1250
    7521       30         1250
    7499       30         1250
    7698       30         1250

    Here's my solution:

    SELECT empno,
           deptno,
           FIRST_VALUE (sal) OVER (PARTITION BY deptno ORDER BY sal DESC)
    FROM  (
           SELECT empno,
                  deptno,
                  DECODE (DENSE_RANK () OVER (PARTITION BY deptno ORDER BY sal DESC), 1, -sal, sal) sal
           FROM   emp
          )
    /

    EMPNO DEPTNO FIRST_VALUE (SAL) OVER (PARTITIONBYDEPTNOORDERBYSALDESC)

    ---------- ---------- -----------------------------------------------------

      7782 10 2450
      7934 10 2450
      7839 10 2450
      7566 20 2975
      7876 20 2975
      7369 20 2975
      7788 20 2975
      7902 20 2975
      7499 30 1600
      7844 30 1600
      7654 30 1600
      7521 30 1600
      7900 30 1600
      7698 30 1600
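
    As an editor's sketch (not from the thread), the same "second highest salary per deptno" can also be written without any join, assuming the standard emp table, by keeping only the salary whose descending DENSE_RANK is 2:

    SELECT  empno
    ,       deptno
    ,       MAX (DECODE (rnk, 2, sal)) OVER (PARTITION BY deptno)   AS second_highest_salary
    FROM   (
             SELECT  empno
             ,       deptno
             ,       sal
             ,       DENSE_RANK () OVER (PARTITION BY deptno ORDER BY sal DESC)   AS rnk
             FROM    emp
           );

    MAX ignores the NULLs that DECODE produces for every other rank, so each row of a department gets that department's second-highest distinct salary (or NULL if the department has only one distinct salary).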
  • Why the different values for an analytic function of the same group/game

    Here is the table I'll be using.

    Select * from table1;

    REC_ID | STATUS | DATE_FROM   | DATE_TO
    1      | C      | 07-JAN-2015 |
    2      | H      | 03-DEC-2014 | 06-JAN-2015
    3      | H      | 03-OCT-2014 | 02-DEC-2014
    4      | H      | 30-MAY-2014 | 02-OCT-2014
    5      | H      | 29-MAY-2014 | 29-MAY-2014
    6      | H      | 16-APR-2014 | 28-MAY-2014
    7      | H      | 25-APR-2007 | 15-APR-2014

    INSERT statements, if you need them:

    SET DEFINE OFF;

    CREATE TABLE TABLE1
    (
      REC_ID     NUMBER,
      STATUS     VARCHAR2(1 BYTE) NOT NULL,
      DATE_FROM  DATE NOT NULL,
      DATE_TO    DATE
    );

    Insert into TABLE1 (REC_ID, STATUS, DATE_FROM)
    Values (1, 'C', TO_DATE('07/01/2015 00:00:00', 'DD/MM/YYYY HH24:MI:SS'));

    Insert into TABLE1 (REC_ID, STATUS, DATE_FROM, DATE_TO)
    Values (2, 'H', TO_DATE('03/12/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE('06/01/2015 00:00:00', 'DD/MM/YYYY HH24:MI:SS'));

    Insert into TABLE1 (REC_ID, STATUS, DATE_FROM, DATE_TO)
    Values (3, 'H', TO_DATE('03/10/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE('02/12/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'));

    Insert into TABLE1 (REC_ID, STATUS, DATE_FROM, DATE_TO)
    Values (4, 'H', TO_DATE('30/05/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE('02/10/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'));

    Insert into TABLE1 (REC_ID, STATUS, DATE_FROM, DATE_TO)
    Values (5, 'H', TO_DATE('29/05/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE('29/05/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'));

    Insert into TABLE1 (REC_ID, STATUS, DATE_FROM, DATE_TO)
    Values (6, 'H', TO_DATE('16/04/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE('28/05/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'));

    Insert into TABLE1 (REC_ID, STATUS, DATE_FROM, DATE_TO)
    Values (7, 'H', TO_DATE('25/04/2007 00:00:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE('15/04/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'));

    COMMIT;

    I run the following analytic query:

    SELECT rec_id, date_from, date_to, status,
           MIN (date_from) OVER (PARTITION BY status ORDER BY date_from DESC) min_dt_from_grp,
           ROW_NUMBER ()   OVER (PARTITION BY status ORDER BY date_from DESC) rownumberdesc,
           ROW_NUMBER ()   OVER (PARTITION BY status ORDER BY date_from ASC)  rownumberasc
    FROM table1;

    The query result:

    REC_ID | DATE_FROM   | DATE_TO     | STATUS | MIN_DT_FROM_GRP | ROWNUMBERDESC | ROWNUMBERASC
    1      | 07-JAN-2015 |             | C      | 07-JAN-2015     | 1             | 1
    2      | 03-DEC-2014 | 06-JAN-2015 | H      | 03-DEC-2014     | 1             | 6
    3      | 03-OCT-2014 | 02-DEC-2014 | H      | 03-OCT-2014     | 2             | 5
    4      | 30-MAY-2014 | 02-OCT-2014 | H      | 30-MAY-2014     | 3             | 4
    5      | 29-MAY-2014 | 29-MAY-2014 | H      | 29-MAY-2014     | 4             | 3
    6      | 16-APR-2014 | 28-MAY-2014 | H      | 16-APR-2014     | 5             | 2
    7      | 25-APR-2007 | 15-APR-2014 | H      | 25-APR-2007     | 6             | 1

    If you look at the output above, the min_dt_from_grp column just repeats each row's own date_from instead of showing one minimum per group.

    My question is: if the analytic function computes over a particular group/set, which here is partitioned by status, and the MIN(date_from) for that partition should be 25-APR-2007 for group H (the status column), then why does the query above return different values in the min_dt_from_grp column?

    Hello

    Because you specified an ORDER BY clause for the analytic function. By doing so, you compute it over a moving window of rows. Since you did not specify a windowing clause, the default applies:

    RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
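
    A sketch (using the table1 above): to get one MIN per status partition, either omit the ORDER BY, or spell out the full window explicitly.

    SELECT rec_id, date_from, date_to, status,
           MIN (date_from) OVER (PARTITION BY status)               min_dt_whole_grp,
           MIN (date_from) OVER (PARTITION BY status
                                 ORDER BY date_from DESC
                                 RANGE BETWEEN UNBOUNDED PRECEDING
                                       AND UNBOUNDED FOLLOWING)     min_dt_full_window
    FROM   table1;

    Both of these columns return 25-APR-2007 on every 'H' row.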

  • Analytical functions: FIRST vs FIRST_VALUE

    Hello

    Can someone please help me understand the difference between FIRST and FIRST_VALUE in analytic functions?

    I tried the 2 queries below, but I see the same output. The only difference I see is that the output is ordered by SAL with FIRST_VALUE, but not with FIRST.

    SELECT ename,
           deptno,
           sal,
           MIN (sal) KEEP (DENSE_RANK FIRST ORDER BY sal)
                     OVER (PARTITION BY deptno) AS first
    FROM   emp;

    SELECT ename,
           deptno,
           sal,
           FIRST_VALUE (sal) OVER (PARTITION BY deptno ORDER BY sal) AS first
    FROM   emp;

    Using: Windows 8.1

    Database Oracle 12 c Enterprise Edition Release 12.1.0.1.0 - 64 bit Production

    PL/SQL Release 12.1.0.1.0 - Production

    "CORE 12.1.0.1.0 Production."

    TNS for 64-bit Windows: Version 12.1.0.1.0 - Production

    NLSRTL Version 12.1.0.1.0 - Production

    Hello

    Here is an example of when you can use the FIRST analytic function.

    Say you want the average sal for each department, but only over the employees hired in the department's first year (taken from the hiredate column); that is the column called f in the query below.

    WITH got_hireyear AS
    (
        SELECT  deptno, ename, sal, hiredate
        ,       EXTRACT (YEAR FROM hiredate)   AS hireyear
        FROM    scott.emp
    )
    SELECT  deptno, hireyear, hiredate, ename, sal
    ,       AVG (sal) KEEP (DENSE_RANK FIRST ORDER BY hireyear)
                      OVER (PARTITION BY deptno)                AS f
    ,       FIRST_VALUE (sal) OVER (PARTITION BY deptno
                                    ORDER BY hireyear)          AS fv
    ,       AVG (sal) OVER (PARTITION BY deptno, hireyear)      AS a
    FROM    got_hireyear
    ORDER BY deptno, hireyear, ename
    ;

    Output:

    DEPTNO HIREYEAR HIREDATE    ENAME       SAL        F    FV        A
    ------ -------- ----------- ------- ------- -------- ----- --------
        10     1981 09-Jun-1981 CLARK      2450  3725.00  2450  3725.00
        10     1981 17-Nov-1981 KING       5000  3725.00  2450  3725.00
        10     1982 23-Jan-1982 MILLER     1300  3725.00  2450  1300.00

        20     1980 17-Dec-1980 SMITH       800   800.00   800   800.00
        20     1981 03-Dec-1981 FORD       3000   800.00   800  2987.50
        20     1981 02-Apr-1981 JONES      2975   800.00   800  2987.50
        20     1987 23-May-1987 ADAMS      1100   800.00   800  2050.00
        20     1987 19-Apr-1987 SCOTT      3000   800.00   800  2050.00

        30     1981 20-Feb-1981 ALLEN      1600  1566.67   950  1566.67
        30     1981 01-May-1981 BLAKE      2850  1566.67   950  1566.67
        30     1981 03-Dec-1981 JAMES       950  1566.67   950  1566.67
        30     1981 28-Sep-1981 MARTIN     1250  1566.67   950  1566.67
        30     1981 08-Sep-1981 TURNER     1500  1566.67   950  1566.67
        30     1981 22-Feb-1981 WARD       1250  1566.67   950  1566.67

    The FIRST_VALUE analytic function can't do that (except in the very special case where only 1 row has the earliest hireyear, as in deptno = 20).  The analytic AVG can't do it either (except in the very special case where all rows have the same hireyear, as in deptno = 30).
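
    A minimal editor's sketch (assuming the same emp table) of where the two really differ: when the measured column is not the ordering column, KEEP ... FIRST aggregates over all rows that tie for first, while FIRST_VALUE returns the value from a single first row.

    SELECT  deptno
    ,       ename
    ,       sal
    ,       MIN (ename) KEEP (DENSE_RANK FIRST ORDER BY sal)
                        OVER (PARTITION BY deptno)              AS keep_first_ename
    ,       FIRST_VALUE (ename) OVER (PARTITION BY deptno
                                      ORDER BY sal)             AS first_value_ename
    FROM    emp;

    Both columns name a lowest-paid employee of the department, but if two employees tie for the lowest sal, keep_first_ename is the MIN of their names, while first_value_ename is whichever tied row happens to come first.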

  • without analytic function

    Hello experts.

    I have data similar to what follows below

    create table t1
    (
      id number(30),
      description varchar(4000)
    
    
    );
    
    insert into t1 values (1, 'zone');
    insert into t1 values (2, 'small');
    
    
    create table t2
    (
       id number(30),
       place varchar(4000),
       info varchar(4000)
    
    );
    
    insert into t2 values (1, 'USA', 'Class U');
    insert into t2 values (1, 'Mexico', 'Class M');
    insert into t2 values (2, 'Germany', 'Class G');
    

    I need help with something similar to what follows below without using any analytic function

    ID   Description   Place     Info
    1    zone          USA       Class U
    1                  Mexico    Class M
    2    small         Germany   Class G

    Any help is appreciated. Thank you

    Hello

    user13328581 wrote:

    ... I use an older version of oracle. Oracle 7.

    Normally, your developers are older than your software.

    You should be able to do what you want with a self-join on t2; one copy (d) will be displayed, and the other copy (c) supplies all the related values you need for comparison.

    SELECT    t2d.id
    ,         DECODE ( t2d.place
                     , MAX (t2c.place)
                     , t1.description
                     )          AS description
    ,         t2d.place
    ,         t2d.info
    FROM      t1
    ,         t2   t2d   -- display
    ,         t2   t2c   -- compare
    WHERE     t1.id  = t2d.id
    AND       t2d.id = t2c.id
    GROUP BY  t1.description
    ,         t2d.id
    ,         t2d.place
    ,         t2d.info
    ORDER BY  t2d.id
    ,         t2d.place   DESC
    ;

    Output:

            ID DESCRIPTION          PLACE                INFO
    ---------- -------------------- -------------------- --------------------
             1 zone                 USA                  Class U
             1                      Mexico               Class M
             2 small                Germany              Class G

    I've tested this in Oracle 11, but it should work in Oracle 7.

    If this isn't the case, you may need to create a view.

  • Analytic Functions

    Hello

    Does anyone know which analytic function can be used to access an earlier date? For example, table PRODUCT:

    Date         Amount   CN code
    25/09/2012   1000     20
    26/09/2013   2000     15
    27/09/2011   1000     8
    28/09/2012   2000     12
    29/09/2013   2000     2
    30/09/2004   1000     4

    This table contains more than 1000 rows across different years. Now I want to get the amounts for a given year and the previous year, like this: 20 + 15 + 12 + 4.

    I need an analytic expression that finds the previous year: 2012 if my year is 2013, or 2010 if my year is 2011.

    You can just use YEAR - 1, right? The LAG analytic function can be used to access the previous row, but your requirement is to get the value of the previous year, not of the previous row. If this isn't what you are looking for, can you post the required output for the data provided?
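
    A sketch of that idea (the product table and its sale_date/amount columns are assumptions here, since no CREATE TABLE was posted): aggregate to one row per year, then use a RANGE window of 1 PRECEDING on the year, which correctly skips a missing year, unlike LAG, which only sees the previous row.

    SELECT  yr
    ,       SUM (yr_amount) OVER (ORDER BY yr
                                  RANGE BETWEEN 1 PRECEDING AND CURRENT ROW)   AS this_plus_prev_year
    FROM   (
             SELECT  EXTRACT (YEAR FROM sale_date)   AS yr
             ,       SUM (amount)                    AS yr_amount
             FROM    product
             GROUP BY EXTRACT (YEAR FROM sale_date)
           )
    ORDER BY yr;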

  • LEAD analytic function


    Hello

    I have a doubt about the LEAD analytic function.

    I have this table,

    create table test3 (no number, name varchar2 (30));


    Insert into TEST3 (NO, NAME) values (1, 'fen');
    Insert into TEST3 (NO, NAME) Values (3, 'DEN');
    Insert into TEST3 (NO, NAME) values (2, 'Sun');
    Insert into TEST3 (NO, NAME) values (2, 'sen');
    Insert into TEST3 (NO, NAME) values (1, 'end');
    COMMIT;

    I get output like this with the following query:


    select lead(no) over (partition by no order by name) no, name from test3;

    NO NAME

    1 fen
      end
    2 Sun
      sen
      DEN

    But I need the output as below; I am unable to get the third 'NO' to show a value, I get null for it, even though I partitioned by 'NO'.

    NO NAME

    1 fen
      end
    2 Sun
      sen
    3 DEN

    Please clear my doubt.

    Thanks in advance.

    Like this

    select decode (rno, 1, no, null) no
         , name
    from (
           select row_number() over (partition by no order by name) rno
                , no
                , name
           from test3
         )

  • Order of evaluation of analytic function

    Hello

    I have a query quite like this:

    with
    -- this query selects a 'representative' acct_id per group (about 300 rows in total)
    acct_repres as
    (
      select distinct acct_id, origin_id, acct_parm_id
      from
      (
        select a.*
             , ap.source_id
             , dense_rank() over (partition by source_id
                                  order by acct_nbr nulls first, acct_id, origin_id) as odr
        from account a
        join account_parm ap on (a.parm_id = ap.acct_parm_id)
      )
      where odr = 1
    )
    select col1
         , col2
         , (select acct_id from acct_repres ar where ar.acct_parm_id = t2.acct_parm_id) col3
         , (select count(1) from acct_repres) col4
    from une_table t1
    join other_table t2 on (...)

    And here it is: the "acct_repres" subquery returns more than 300 rows when run on its own. But when it is used as a CTE, sometimes (depending on the execution plan) it seems to contain only one row - the value in column col4 is '1',

    while the value of col3 is NULL in most cases.

    It looks like the dense_rank function and the condition 'where odr = 1' are evaluated at the very end.

    When I used the MATERIALIZE hint, the result was the same.

    But when I put the result of acct_repres into a dedicated table and use this table instead of the CTE, the output is correct.

    Is this a bug? Or am I doing something wrong?

    PS: my db version is 11gR1 (11.1.0.7).

    some unorganized comments:

    - analytic functions are evaluated towards the end of execution ("the last set of operations performed in a query with the exception of the final ORDER BY clause" - http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions004.htm)

    - but still, the result of a SQL query must be deterministic, so I think your results are not expected behavior

    - the CBO has some problems with common table expressions (http://jonathanlewis.wordpress.com/2012/05/24/subquery-factoring-7/) even though they are of great assistance in structuring complex queries. In such cases, you can avoid problems by using inline views

    - your query uses the common table expression in scalar subqueries, and scalar subqueries are also likely to confuse the CBO. In addition, they are executed once for each row of your result set (or at least for each distinct correlation value) and can have a negative impact on query performance in many cases. Often they can be replaced by outer joins, as in the sketch below

    - you say that the MATERIALIZE hint gives you an erroneous result: does the INLINE hint give you the correct results?
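
    A sketch (not part of the original reply) of that outer-join rewrite, keeping the acct_repres CTE and the elided join condition from the question:

    select t1.col1
         , t2.col2
         , ar.acct_id   as col3
    from   une_table t1
    join   other_table t2 on (...)          -- join condition elided, as in the question
    left join acct_repres ar on (ar.acct_parm_id = t2.acct_parm_id)

    (col4, the total count of acct_repres rows, would still need its own subquery or a cross join to a pre-aggregated count.)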

    Regards

    Martin Preiss

  • Problem with analytical function for date

    Hi all

    ORCL version:
    Oracle Database 11 g Enterprise Edition Release 11.2.0.2.0 - 64 bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    "CORE 11.2.0.2.0 Production."
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    I have a problem with an analytic function on a date. I'm trying to group records based on a timestamp, but I'm failing to do so.
    Could you please help me find what I'm missing?
    This is the subquery. No issue with this. I'm just posting it for reference. 
    select sum(disclosed_cost_allocation.to_be_paid_amt) amt,
        substr(reference_data.ref_code,4,10) cd,
        to_char(external_order_status.status_updated_tmstp, 'DD-MON-YYYY HH24:MI:SS') tmstp,
        DISCLOSED_CLOSING_COST.DISCLOSED_CLOSING_COST_ID id
      FROM Deal.Fee_Mapping_Definition ,
        Deal.Fee_Index_Definition ,
        Deal.Fee_Closing_Cost_Item,
        Deal.Closing_Cost,
        Deal.Document_Generation_Request,
        deal.PRODUCT_REQUEST,
        deal.External_Order_Request,
        deal.External_Order_Status,
        deal. DISCLOSED_CLOSING_COST,
        deal.DISCLOSED_COST_ALLOCATION,
        deal.reference_data
      WHERE Fee_Mapping_Definition.Fee_Code                    = Fee_Index_Definition.Fee_Code
      AND Fee_Index_Definition.Fee_Index_Definition_Id         = Fee_Closing_Cost_Item.Fee_Index_Definition_Id
      AND Fee_Closing_Cost_Item.Closing_Cost_Id                = Closing_Cost.Closing_Cost_Id
      AND CLOSING_COST.PRODUCT_REQUEST_ID                      = Document_Generation_Request.Product_Request_Id
      AND closing_cost.product_request_id                      = product_request.product_request_id
      AND Product_Request.Deal_Id                              = External_Order_Request.Deal_Id
      AND external_order_request.external_order_request_id     = external_order_status.external_order_request_id
      AND external_order_request.external_order_request_id     = disclosed_closing_cost.external_order_request_id
      AND DISCLOSED_CLOSING_COST. DISCLOSED_CLOSING_COST_ID    = DISCLOSED_COST_ALLOCATION.DISCLOSED_CLOSING_COST_ID
      AND Fee_Index_Definition.Fee_Index_Definition_Id         = Disclosed_Closing_Cost.Fee_Index_Definition_Id
      AND Fee_Mapping_Definition.Document_Line_Series_Ref_Id   = Reference_Data.Reference_Data_Id
      AND Document_Generation_Request.Document_Package_Ref_Id IN (7392 ,2209 )
      AND External_Order_Status.Order_Status_Txt               = ('GenerationCompleted')
      AND Fee_Mapping_Definition.Document_Line_Series_Ref_Id  IN ( 7789, 7788,7596 )
      AND FEE_MAPPING_DEFINITION.DOCUMENT_TYPE_REF_ID          = 1099
      AND Document_Generation_Request.Product_Request_Id      IN
        (SELECT PRODUCT_REQUEST.PRODUCT_REQUEST_id
        FROM Deal.Disclosed_Cost_Allocation,
          Deal.Disclosed_Closing_Cost,
          DEAL.External_Order_Request,
          DEAL.PRODUCT_REQUEST,
          Deal.Scenario
        WHERE Disclosed_Cost_Allocation.Disclosed_Closing_Cost_Id = Disclosed_Closing_Cost.Disclosed_Closing_Cost_Id
        AND Disclosed_Closing_Cost.External_Order_Request_Id      = External_Order_Request.External_Order_Request_Id
        AND External_Order_Request.Deal_Id                        = Product_Request.Deal_Id
        AND product_request.scenario_id                           = scenario.scenario_id
        AND SCENARIO.SCENARIO_STATUS_TYPE_REF_ID                  = 7206
        AND product_request.servicing_loan_acct_num              IS NOT NULL
        AND product_request.servicing_loan_acct_num               = 0017498379
          --AND Disclosed_Cost_Allocation.Disclosed_Cost_Allocation_Id = 5095263
        )
      GROUP BY DISCLOSED_CLOSING_COST.DISCLOSED_CLOSING_COST_ID,
        External_Order_Status.Status_Updated_Tmstp,
        Reference_Data.Ref_Code,
        disclosed_cost_allocation.to_be_paid_amt
      order by 3 desc,
        1 DESC;
    
    Result:
    2000     1304-1399     28-JUL-2012 19:49:47     6880959
    312     1302     28-JUL-2012 19:49:47     6880958
    76     1303     28-JUL-2012 19:49:47     6880957
    2000     1304-1399     28-JUL-2012 18:02:16     6880539
    312     1302     28-JUL-2012 18:02:16     6880538
    76     1303     28-JUL-2012 18:02:16     6880537
    
    
    But, when I try to group the timestamp using analytical function,
    
    
    select amt 
            ,cd 
            ,rank() over(partition by tmstp order by tmstp desc) rn 
    from 
    (select sum(disclosed_cost_allocation.to_be_paid_amt) amt,
        substr(reference_data.ref_code,4,10) cd,
        to_char(external_order_status.status_updated_tmstp, 'DD-MON-YYYY HH24:MI:SS') tmstp,
        DISCLOSED_CLOSING_COST.DISCLOSED_CLOSING_COST_ID id
      FROM Deal.Fee_Mapping_Definition ,
        Deal.Fee_Index_Definition ,
        Deal.Fee_Closing_Cost_Item,
        Deal.Closing_Cost,
        Deal.Document_Generation_Request,
        deal.PRODUCT_REQUEST,
        deal.External_Order_Request,
        deal.External_Order_Status,
        deal. DISCLOSED_CLOSING_COST,
        deal.DISCLOSED_COST_ALLOCATION,
        deal.reference_data
      WHERE Fee_Mapping_Definition.Fee_Code                    = Fee_Index_Definition.Fee_Code
      AND Fee_Index_Definition.Fee_Index_Definition_Id         = Fee_Closing_Cost_Item.Fee_Index_Definition_Id
      AND Fee_Closing_Cost_Item.Closing_Cost_Id                = Closing_Cost.Closing_Cost_Id
      AND CLOSING_COST.PRODUCT_REQUEST_ID                      = Document_Generation_Request.Product_Request_Id
      AND closing_cost.product_request_id                      = product_request.product_request_id
      AND Product_Request.Deal_Id                              = External_Order_Request.Deal_Id
      AND external_order_request.external_order_request_id     = external_order_status.external_order_request_id
      AND external_order_request.external_order_request_id     = disclosed_closing_cost.external_order_request_id
      AND DISCLOSED_CLOSING_COST. DISCLOSED_CLOSING_COST_ID    = DISCLOSED_COST_ALLOCATION.DISCLOSED_CLOSING_COST_ID
      AND Fee_Index_Definition.Fee_Index_Definition_Id         = Disclosed_Closing_Cost.Fee_Index_Definition_Id
      AND Fee_Mapping_Definition.Document_Line_Series_Ref_Id   = Reference_Data.Reference_Data_Id
      AND Document_Generation_Request.Document_Package_Ref_Id IN (7392 ,2209 )
      AND External_Order_Status.Order_Status_Txt               = ('GenerationCompleted')
      AND Fee_Mapping_Definition.Document_Line_Series_Ref_Id  IN ( 7789, 7788,7596 )
      AND FEE_MAPPING_DEFINITION.DOCUMENT_TYPE_REF_ID          = 1099
      AND Document_Generation_Request.Product_Request_Id      IN
        (SELECT PRODUCT_REQUEST.PRODUCT_REQUEST_id
        FROM Deal.Disclosed_Cost_Allocation,
          Deal.Disclosed_Closing_Cost,
          DEAL.External_Order_Request,
          DEAL.PRODUCT_REQUEST,
          Deal.Scenario
        WHERE Disclosed_Cost_Allocation.Disclosed_Closing_Cost_Id = Disclosed_Closing_Cost.Disclosed_Closing_Cost_Id
        AND Disclosed_Closing_Cost.External_Order_Request_Id      = External_Order_Request.External_Order_Request_Id
        AND External_Order_Request.Deal_Id                        = Product_Request.Deal_Id
        AND product_request.scenario_id                           = scenario.scenario_id
        AND SCENARIO.SCENARIO_STATUS_TYPE_REF_ID                  = 7206
        AND product_request.servicing_loan_acct_num              IS NOT NULL
        AND product_request.servicing_loan_acct_num               = 0017498379
          --AND Disclosed_Cost_Allocation.Disclosed_Cost_Allocation_Id = 5095263
        )
      GROUP BY DISCLOSED_CLOSING_COST.DISCLOSED_CLOSING_COST_ID,
        External_Order_Status.Status_Updated_Tmstp,
        Reference_Data.Ref_Code,
        disclosed_cost_allocation.to_be_paid_amt
      order by 3 desc,
        1 DESC);
    
    Result:
    312     1302            1
    2000     1304-1399     1
    76     1303            1
    312     1302            1
    2000     1304-1399     1
    76     1303            1 
    
    
    Required output:
    312     1302            1
    2000     1304-1399     1
    76     1303            1
    312     1302            2
    2000     1304-1399     2
    76     1303            2
    THX
    Rod.

    Hey, Rod,

    My guess is that you want:

    , dense_rank () over (order by  tmstp  desc)  AS rn 
    

    RANK means you'll skip numbers when there is a tie. For example, if 3 rows have the exact same latest tmstp, all 3 rows would be assigned number 1; RANK would then assign 4 to the next row, but DENSE_RANK assigns 2.
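
    A tiny standalone illustration (editor's sketch, not the poster's tables) of that difference:

    WITH t AS (
      SELECT 10 AS x FROM dual UNION ALL
      SELECT 10      FROM dual UNION ALL
      SELECT 10      FROM dual UNION ALL
      SELECT  5      FROM dual
    )
    SELECT  x
    ,       RANK ()       OVER (ORDER BY x DESC)  AS rnk          -- 1, 1, 1, 4
    ,       DENSE_RANK () OVER (ORDER BY x DESC)  AS dense_rnk    -- 1, 1, 1, 2
    FROM    t;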

    "PARTITION x" means that you are looking for a separate series of numbers (starting with 1) for each value of x. If you want just a series of numbers for the entire result set, then do not use a PARTITION BY clause at all. (PARTITION BY is never required.)
    Maybe you want to PARTITIONNER IN cd. I can't do it without some examples of data, as well as an explanation of why you want the results of these data.
    You certainly don't want to PARTITION you BY the same expression ORDER BY; It simply means that all the lines are tied for #1.

    I hope that answers your question.
    If not, post a small example of data (CREATE TABLE and INSERT statements for only the relevant columns) for all of the tables involved, and also post the results you want from that data.
    Explain, using specific examples, how you get those results from that data.
    Simplify the problem as much as possible.
    Always say which version of Oracle you are using.
    See the forum FAQ {message:id=9360002}

    Edited by: Frank Kulash, August 1, 2012 13:20

  • application of analytic function

    SELECT C, CAL FROM Table;

    THE OUTPUT IS:
    C CAL
    C1 701
    C1 702
    C1 703
    C2 701
    C2 702
    C3 701
    C3 702
    C3 703

    I want to use an analytic function and
    get output like below:

    C CAL
    C1 703
    C2 702
    C3 703

    Please guide me on how I can get this.

    Thank you

    Hello

    with data as (
    select 'C1' C, 701 CAL from dual union all
    select 'C1', 702 from dual union all
    select 'C1', 703 from dual union all
    select 'C2', 701 from dual union all
    select 'C2', 702 from dual union all
    select 'C3', 701 from dual union all
    select 'C3', 702 from dual union all
    select 'C3', 703 from dual
    )
    
    select
     c
    ,max(cal)
    from data
    group by
     c
    
    C;MAX(CAL)
    C1;703
    C2;702
    C3;703
    
    You can do this with analytics e.g.
    
    select
     c
    ,max(cal) keep (dense_rank first order by c) over (partition by c ) cal
    from data
    
    C;CAL
    C1;703
    C1;703
    C1;703
    C2;702
    C2;702
    C3;703
    C3;703
    C3;703
    
    but then you will always need a DISTINCT at the end
    
    select distinct
     c
    ,max(cal) keep (dense_rank first order by c) over (partition by c ) cal
    from data
    
    C;CAL
    C3;703
    C2;702
    C1;703
    
    Therefore, in the end, you will prefer the GROUP BY anyway :-)
    

    Regards

  • Consolidating date ranges using analytic functions

    I am trying to establish how long a person was at a location. The data looks like this:

    person      location   recorded_date
    --------------------------------------------------------
    PERSON_X    LOC_A      01/01/2012 10:10
    PERSON_X    LOC_A      03/01/2012 15:10
    PERSON_X    LOC_B      04/01/2012 02:00
    PERSON_X    LOC_B      05/01/2012 11:10
    PERSON_X    LOC_A      06/01/2012 03:10

    What I want in the output is to split this into 3 ranges. What I get with MIN and RANK is a grouping of the last LOC_A rows with the first ones, which ignores the time in between when they were at a different location.

    person      location   start date          stop date
    -----------------------------------------------------------------------------------
    PERSON_X    LOC_A      01/01/2012 10:10    04/01/2012 02:00
    PERSON_X    LOC_B      04/01/2012 02:00    06/01/2012 03:10
    PERSON_X    LOC_A      06/01/2012 03:10

    Hello

    DanU says:
    Thanks Frank! This was extremely helpful. I can probably manage the final steps. The only piece I am missing is having the end date defined by the next recorded date. So I might try your query with the LEAD function.

    Sorry, I didn't understand what you meant by the end date.
    You're right; all you need is the LEAD analytic function:

    WITH     got_grp          AS
    (
         SELECT     recorded_date, cqm_category, pat_acct_num
         ,     ROW_NUMBER () OVER ( PARTITION BY  pat_acct_num
                                   ORDER BY          recorded_date
                           )
               -     ROW_NUMBER () OVER ( PARTITION BY  pat_acct_num
                                         ,                    cqm_category
                                   ORDER BY          recorded_date
                           )         AS grp
         FROM    export_table
    )
    SELECT       MIN (recorded_date)               AS start_date
    ,       LEAD (MIN (recorded_date)) OVER ( PARTITION BY  pat_acct_num
                                                 ORDER BY         MIN (recorded_date)
                               )     AS stop_date
    ,       cqm_category
    ,       pat_acct_num
    FROM       got_grp
    GROUP BY  pat_acct_num, cqm_category, grp
    ORDER BY  pat_acct_num, start_date
    ;
    

    It's almost the same query as what I posted before.
    Apart from substituting your new table and column names, the only change I made was how stop_date is defined in the main query.
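
    For reference, an editor's sketch of the same ROW_NUMBER-difference technique applied to the person/location sample at the top of this question (the table name person_location is an assumption):

    WITH got_grp AS
    (
        SELECT  person, location, recorded_date
        ,       ROW_NUMBER () OVER (PARTITION BY person           ORDER BY recorded_date)
              - ROW_NUMBER () OVER (PARTITION BY person, location ORDER BY recorded_date)   AS grp
        FROM    person_location
    )
    SELECT    person
    ,         location
    ,         MIN (recorded_date)                                                  AS start_date
    ,         LEAD (MIN (recorded_date)) OVER ( PARTITION BY person
                                                ORDER BY MIN (recorded_date) )     AS stop_date
    FROM      got_grp
    GROUP BY  person, location, grp
    ORDER BY  person, start_date;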

  • Question on an analytic function

    Hello
    I am using the query below:

    Select * from
    (
      SELECT FLAG, S_DATE,
             ROW_NUMBER () OVER (PARTITION BY FLAG ORDER BY S_DATE) d
      FROM Table_name
      ORDER BY S_DATE
    );

    which gives the output below:

    Flag | S_DATE | D
    Y     | 27/02/2012 05:33 |     1
    Y     | 27/02/2012 05:34 |     2
    Y     | 27/02/2012 05:34 |     3
    N     | 27/02/2012 05:34 |     1
    N     | 27/02/2012 05:34 |     2
    N     | 27/02/2012 05:34 |     3
    N     | 27/02/2012 05:35 |     4
    N     | 27/02/2012 05:35 |     5
    Y     |  27/02/2012 05:36 |     4
    Y     |  27/02/2012 05:36 |     5
    Y     |  27/02/2012 05:36 |     6


    But I want the output to be as below, where the numbering restarts in the last 3 rows:

    Flag | S_DATE | D

    Y     | 27/02/2012 05:33 |     1
    Y     | 27/02/2012 05:34 |     2
    Y     | 27/02/2012 05:34 |     3
    N     | 27/02/2012 05:34 |     1
    N     | 27/02/2012 05:34 |     2
    N     | 27/02/2012 05:34 |     3
    N     | 27/02/2012 05:35 |     4
    N     | 27/02/2012 05:35 |     5
    Y     |  27/02/2012 05:36 |     1
    Y     |  27/02/2012 05:36 |     2
    Y     |  27/02/2012 05:36 |     3

    I used the analytical function.

    Edited by: user8858890 on February 27, 2012 02:00

    Hello

    user8858890 wrote:
    ... But I want the output to be as below, where the numbering restarts in the last 3 rows:

    Flag | S_DATE | D

    Y     | 27/02/2012 05:33 |     1
    Y     | 27/02/2012 05:34 |     2
    Y     | 27/02/2012 05:34 |     3
    N     | 27/02/2012 05:34 |     1
    N     | 27/02/2012 05:34 |     2
    N     | 27/02/2012 05:34 |     3
    N     | 27/02/2012 05:35 |     4
    N     | 27/02/2012 05:35 |     5
    Y     |  27/02/2012 05:36 |     1
    Y     |  27/02/2012 05:36 |     2
    Y     |  27/02/2012 05:36 |     3

    Why do you want the last 3 rows (which have flag = 'Y') to be numbered 1, 2, 3, when the first 3 rows (which also have flag = 'Y') already have the numbers 1, 2 and 3? Do you want the numbering to restart at #1 whenever there is a new group of consecutive rows (when ordered by s_date) that have the same flag? If so, you need to identify the groups, like this:

    WITH     got_grp_id     AS
    (
         SELECT     flag
         ,     s_date
         ,     ROWID               AS r_id
         ,     ROW_NUMBER () OVER ( ORDER BY      s_date
                                   ,                  ROWID
                           )
               - ROW_NUMBER () OVER ( PARTITION BY  flag
                                         ORDER BY          s_date
                             ,               ROWID
                           )    AS grp_id
         FROM    table_name
    )
    SELECT       flag
    ,       s_date
    ,       ROW_NUMBER () OVER ( PARTITION BY  flag
                                 ,          grp_id
                          ORDER BY          s_date
                          ,               r_id
                        )      AS d
    FROM      got_grp_id
    ORDER BY  s_date
    ,            grp_id
    ,       d
    ;
    

    This assumes that each row can be uniquely identified, so that the order is unambiguous. In your sample data there are completely identical rows, so I used ROWID to uniquely identify them. Using ROWID assumes table_name is a real table, not just a result set.

    I hope that answers your question.
    If not, post a small example of data (CREATE TABLE and INSERT statements for only the relevant columns) for all of the tables involved, and the results you want from that data.
    Explain, using specific examples, how you get those results from that data.
    Always say which version of Oracle you are using.
