Question about an analytic function

Hello world

I used the HR schema to test 2 queries

1 - SELECT deptno, empno, sal,
    SUM (sal) OVER (PARTITION BY deptno ORDER BY sal
                    RANGE BETWEEN UNBOUNDED PRECEDING AND sal/2 PRECEDING) CNT_LT_HALF
    FROM emp
    WHERE deptno IN (20, 30)
    ORDER BY deptno, sal

the first query worked,

but when I ran the second, the results were not clear. Can someone explain what the result of this query should be (because I got no data)?

SELECT deptno, empno, sal,
SUM (sal) OVER (PARTITION BY deptno ORDER BY sal
                RANGE BETWEEN UNBOUNDED PRECEDING AND sal PRECEDING) CNT_LT_HALF
FROM emp
WHERE deptno IN (20, 30)
ORDER BY deptno, sal

MDK.

Hello

MDK wrote:
Hello world

I used the HR schema to test 2 queries

Your hr schema is not the same as mine. It looks like I have a similar table in my scott schema.

1 - SELECT deptno, empno, sal,
    SUM (sal) OVER (PARTITION BY deptno ORDER BY sal
                    RANGE BETWEEN UNBOUNDED PRECEDING AND sal/2 PRECEDING) CNT_LT_HALF
    FROM emp
    WHERE deptno IN (20, 30)
    ORDER BY deptno, sal

the first query worked,

but when I ran the second, the results were not clear. Can someone explain what the result of this query should be (because I got no data)?

Do you mean that the cnt_lt_half column is always NULL?

SELECT deptno, empno, sal,
SUM (sal) OVER (PARTITION BY deptno ORDER BY sal
                RANGE BETWEEN UNBOUNDED PRECEDING AND sal PRECEDING) CNT_LT_HALF
FROM emp
WHERE deptno IN (20, 30)
ORDER BY deptno, sal

We will do both calculations in the same query:

SELECT  deptno
,     empno
,     sal
,     SUM (sal) OVER ( PARTITION BY  deptno
                           ORDER BY      sal
                RANGE BETWEEN UNBOUNDED PRECEDING
                      AND     (sal/2)       PRECEDING
               ) AS cnt_lt_half
,     SUM (sal) OVER ( PARTITION BY  deptno
                           ORDER BY      sal
                RANGE BETWEEN UNBOUNDED PRECEDING
                      AND     (sal)       PRECEDING
               ) AS cnt_lt_full
FROM      scott.emp     -- or whatever
WHERE      deptno IN (20, 30)
ORDER BY  deptno
,            sal
;

Output:

.   DEPTNO      EMPNO        SAL CNT_LT_HALF CNT_LT_FULL
---------- ---------- ---------- ----------- -----------
        20       7369        800
        20       7876       1100
        20       7566       2975        1900
        20       7788       3000        1900
        20       7902       3000        1900
        30       7900        950
        30       7521       1250
        30       7654       1250
        30       7844       1500
        30       7499       1600
        30       7698       2850        3450

So you understand cnt_lt_half, but you do not understand cnt_lt_full.
Let us look at cnt_lt_half.
On the first row, where empno = 7369, sal is 800. The RANGE for cnt_lt_half is all the rows whose sal is at least (800/2) = 400 less than the current sal. The current sal is 800, so the range goes up to (and including) 800 - 400 = 400. There is nobody in this department with a sal that low, so cnt_lt_half is NULL.
On the last row, where empno = 7698, sal is 2850. The RANGE for cnt_lt_half is all the rows whose sal is at least (2850/2) = 1425 less than the current sal. The current sal is 2850, so the range goes up to (and including) 2850 - 1425 = 1425. There are 3 rows in this department with sals in that range: 950 + 1250 + 1250 = 3450, so cnt_lt_half is 3450.

Now let's look at cnt_lt_full.
On the first row, where empno = 7369, sal is 800. The RANGE for cnt_lt_full is all the rows whose sal is at least 800 less than the current sal. The current sal is 800, so the range goes up to (and including) 800 - 800 = 0. There is nobody in this department with a sal that low, so cnt_lt_full is NULL.
On the last row, where empno = 7698, sal is 2850. The RANGE for cnt_lt_full is all the rows whose sal is at least 2850 less than the current sal. The current sal is 2850, so the range goes up to (and including) 2850 - 2850 = 0. There is no row in this department with a sal in that range, so cnt_lt_full is NULL.

The upper end of "RANGE BETWEEN ... AND sal PRECEDING" will always be 0. If sal is always greater than 0, then no rows will ever be in that range.
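To see the frame arithmetic concretely, here is a minimal Python sketch (not Oracle; the salaries are the deptno 30 values from the output above) that mimics `RANGE BETWEEN UNBOUNDED PRECEDING AND n PRECEDING`:

```python
# Simulate SUM(sal) OVER (PARTITION BY deptno ORDER BY sal
#                         RANGE BETWEEN UNBOUNDED PRECEDING AND n PRECEDING)
# for n = sal/2 and n = sal, using the deptno 30 salaries shown above.

def sum_range_preceding(sals, offset_fn):
    """For each sal, sum all sals <= sal - offset_fn(sal); None if the frame is empty."""
    results = []
    for cur in sals:
        upper = cur - offset_fn(cur)           # upper end of the RANGE frame
        frame = [s for s in sals if s <= upper]
        results.append(sum(frame) if frame else None)
    return results

dept30 = [950, 1250, 1250, 1500, 1600, 2850]   # sals in deptno 30, in order

cnt_lt_half = sum_range_preceding(dept30, lambda s: s / 2)  # sal/2 PRECEDING
cnt_lt_full = sum_range_preceding(dept30, lambda s: s)      # sal PRECEDING

print(cnt_lt_half)  # [None, None, None, None, None, 3450]
print(cnt_lt_full)  # [None, None, None, None, None, None] -- frame top is always 0
```

This reproduces the NULL/3450 pattern in the output above, and shows why cnt_lt_full can never find a row: its frame always ends at 0.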

Did you expect something else? If so, what?
Do you want particular results? If so, what?

Tags: Database

Similar Questions

  • Hopefully a simple question on Analytic Functions

    Hi, we are on 10.2.0.4 under Red Hat Linux.

    I have a situation that I suspect can be answered better with analytics, but I'm struggling to find the best solution.

    First, the data definitions:

    CREATE TABLE POSITION_ASGN
    ( EMPLID VARCHAR2 (5) NOT NULL
    , ASOFDATE DATE
    , ACT_POSN VARCHAR2 (5) NOT NULL
    , SUB_POSN VARCHAR2 (5) NOT NULL
    , RPT_POSN VARCHAR2 (5) NOT NULL
    );
    INSERT INTO POSITION_ASGN
    VALUES ('EMP01', to_date('01-JAN-2013','dd-mon-yyyy'), '00065', '00065', '00033')
    ;
    INSERT INTO POSITION_ASGN
    VALUES ('EMP01', to_date('01-FEB-2013','dd-mon-yyyy'), '00096', '00065', '00054')
    ;
    INSERT INTO POSITION_ASGN
    VALUES ('EMP02', to_date('01-JAN-2013','dd-mon-yyyy'), '00096', '00096', '00054')
    ;
    INSERT INTO POSITION_ASGN
    VALUES ('EMP03', to_date('01-JAN-2013','dd-mon-yyyy'), '00103', '00096', '00054')
    ;
    INSERT INTO POSITION_ASGN
    VALUES ('EMP04', to_date('01-JAN-2013','dd-mon-yyyy'), '00117', '00096', '00054')
    ;
    INSERT INTO POSITION_ASGN
    VALUES ('MGR01', to_date('01-JAN-2013','dd-mon-yyyy'), '00054', '00054', '00017')
    ;
    INSERT INTO POSITION_ASGN
    VALUES ('MGR02', to_date('01-JAN-2013','dd-mon-yyyy'), '00054', '00054', '00017')
    ;



    The table tracks where a person sits in the organization.
    ASOFDATE - tracks history over time
    ACT_POSN - acting position, where the person physically is
    SUB_POSN - substantive position, where the person should be; usually the same as ACT_POSN, but if you are acting for someone else it is your original position
    RPT_POSN - the position you report to in your acting position

    What I want to do is: for a given date and a given position number, return a data set that shows all acting holders in one column, all substantive holders in a second column, and all holders of the acting position's reporting position in a third column.

    Ignoring the date aspect for the moment, I can create a simple union query:

    SELECT 'ACTING' "MODE", ACT_POSN, EMPLID
    FROM POSITION_ASGN
    WHERE ACT_POSN = '00096'
    UNION
    SELECT 'SUBST' "MODE", SUB_POSN, EMPLID
    FROM POSITION_ASGN
    WHERE SUB_POSN = '00096'
    UNION
    SELECT 'MGR' "MODE", A.ACT_POSN, A.EMPLID
    FROM POSITION_ASGN A, POSITION_ASGN B
    WHERE A.ACT_POSN = B.RPT_POSN
    AND B.ACT_POSN = '00096'


    which produces a single set of output:
    ACTING 00096 EMP01
    ACTING 00096 EMP02
    MGR    00054 MGR01
    MGR    00054 MGR02
    SUBST  00096 EMP02
    SUBST  00096 EMP03
    SUBST  00096 EMP04

    But I want to pivot that into a 3-row table according to the value of 'MODE', so that I end up with:
    ACTING - SUBST - MANAGER
    EMP01  - EMP02 - MGR01
    EMP02  - EMP03 - MGR02
    blank  - EMP04 - blank

    I can see how I could generate a RANK() partitioned by POSN and MODE to reflect the fact that I need 3 rows (because there are three substantive holders in 00096), perhaps use that as a row-number column, and then join the selects based on MODE and the row-number value, but it all feels so round-the-houses that I wonder if analytics can do it better.

    Does anyone have advice / sample code I could crib from?

    Edited by: Paula Scorchio on 17 March 2013 22:49

    I forgot to say: here is my rough code that more or less does what I need
    SELECT A.RN
    ,      MAX (DECODE (B.RN, A.RN, DECODE (PMODE, 'ACTING', EMPLID, ''))) "ACTING"
    ,      MAX (DECODE (B.RN, A.RN, DECODE (PMODE, 'SUBST',  EMPLID, ''))) "SUBST"
    ,      MAX (DECODE (B.RN, A.RN, DECODE (PMODE, 'MGR',    EMPLID, ''))) "MGR"
    FROM
    (SELECT ROWNUM "RN" FROM DUAL CONNECT BY LEVEL <=
      (SELECT MAX (RN) FROM
        (SELECT 'ACTING' "PMODE", ACT_POSN, EMPLID, RANK () OVER (PARTITION BY ACT_POSN ORDER BY EMPLID) "RN"
         FROM POSITION_ASGN
         WHERE ACT_POSN = '00096'
         UNION
         SELECT 'SUBST' "PMODE", SUB_POSN, EMPLID, RANK () OVER (PARTITION BY SUB_POSN ORDER BY EMPLID) "RN"
         FROM POSITION_ASGN
         WHERE SUB_POSN = '00096'
         UNION
         SELECT 'MGR' "PMODE", A.ACT_POSN, A.EMPLID, RANK () OVER (PARTITION BY A.ACT_POSN ORDER BY A.EMPLID) "RN"
         FROM POSITION_ASGN A, POSITION_ASGN B
         WHERE A.ACT_POSN = B.RPT_POSN
         AND B.ACT_POSN = '00096'))) A,
    (SELECT 'ACTING' "PMODE", ACT_POSN, EMPLID, RANK () OVER (PARTITION BY ACT_POSN ORDER BY EMPLID) "RN"
     FROM POSITION_ASGN
     WHERE ACT_POSN = '00096'
     UNION
     SELECT 'SUBST' "PMODE", SUB_POSN, EMPLID, RANK () OVER (PARTITION BY SUB_POSN ORDER BY EMPLID) "RN"
     FROM POSITION_ASGN
     WHERE SUB_POSN = '00096'
     UNION
     SELECT 'MGR' "PMODE", A.ACT_POSN, A.EMPLID, RANK () OVER (PARTITION BY A.ACT_POSN ORDER BY A.EMPLID) "RN"
     FROM POSITION_ASGN A, POSITION_ASGN B
     WHERE A.ACT_POSN = B.RPT_POSN
     AND B.ACT_POSN = '00096') B
    GROUP BY A.RN

    Hello

    Thanks for posting the CREATE TABLE and INSERT statements; It's very useful!

    What does each row of output represent? It seems that the nth output row has the nth distinct acting emplid, the nth substantive emplid, and the nth manager, all joined into one row. The columns of each row all relate to the same target id ('00096' in this case), but other than that they seem to have no real connection to each other.
    It looks like a fixed pivot query, like this:

    WITH     targets          AS
    (
         SELECT     '00096'     AS target_id     FROM dual
    )
    ,     all_modes     AS
    (
         SELECT     CASE LEVEL
                  WHEN  1  THEN  'ACT'
                  WHEN  2  THEN  'SUB'
                  WHEN  3  THEN  'RPT'
              END     AS mode_abbr
         FROM     dual
         CONNECT BY     LEVEL     <= 3
    )
    ,     unpivoted_data     AS
    (
         SELECT  am.mode_abbr
         ,     t.target_id
         ,     CASE
                  WHEN  am.mode_abbr = 'ACT'
                   AND  e.act_posn   = t.target_id  THEN  e.emplid
                  WHEN  am.mode_abbr = 'SUB'
                   AND  e.sub_posn   = t.target_id  THEN  e.emplid
                  WHEN  am.mode_abbr = 'RPT'
                   AND  e.act_posn   = t.target_id  THEN  m.emplid
              END     AS emplid
         FROM               targets     t
         CROSS JOIN        all_modes     am
         JOIN                  position_asgn  e  ON   t.target_id IN ( e.act_posn
                                                  , e.sub_posn
                                              )
         LEFT OUTER JOIN  position_asgn     m  ON   m.act_posn  = e.rpt_posn
    )
    ,     got_r_num     AS
    (
         SELECT     u.*
         ,     DENSE_RANK () OVER ( PARTITION BY  target_id
                                   ,                    mode_abbr
                             ORDER BY        emplid
                           )         AS r_num
         FROM     unpivoted_data  u
         WHERE     emplid  IS NOT NULL
    )
    SELECT       MIN (CASE WHEN mode_abbr = 'ACT' THEN emplid END)     AS acting
    ,       MIN (CASE WHEN mode_abbr = 'SUB' THEN emplid END)     AS substantive
    ,       MIN (CASE WHEN mode_abbr = 'RPT' THEN emplid END)     AS manager
    ,       target_id
    FROM       got_r_num
    GROUP BY  target_id, r_num
    ORDER BY  target_id, r_num
    ;
    

    There may be times when you'll want to do this for several targets at once, not just one target (e.g. '00096'). In the first sub-query, targets, specify as many as you want, including 1.
    Instead of doing a 3-way UNION to unpivot the data, it will probably be more efficient to cross-join to a 'table' with 3 rows. That's what the next sub-query, all_modes, is for.
    The 3rd sub-query, unpivoted_data, finds the relevant rows in position_asgn and unpivots (or splits) them into 3 rows, for the 3 modes.
    The next sub-query, got_r_num, assigns a row number to each distinct value to be displayed. This could have been done in the previous sub-query, but to avoid repeating the big CASE expression I used a separate sub-query. If you don't care about the order of the items, you could just as easily compute DENSE_RANK in unpivoted_data.
    The main query re-combines the nth rows for each target into 1 row. If you have only one target, then of course you don't need to display target_id.

    For more information about this kind of pivot query, see {message:id=4459268}
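The unpivot, number, and re-pivot steps above can be sketched in plain Python (this mimics the logic, not Oracle itself; the mode lists are taken from the sample POSITION_ASGN data, and it assumes no duplicate emplids within a mode):

```python
# Mimic the unpivot -> number -> re-pivot approach for target position '00096'.
modes = {
    "ACT": ["EMP01", "EMP02"],            # acting holders of '00096'
    "SUB": ["EMP02", "EMP03", "EMP04"],   # substantive holders of '00096'
    "RPT": ["MGR01", "MGR02"],            # holders of the reporting position
}

# Number each mode's emplids 1..n (the DENSE_RANK step) ...
numbered = {m: dict(enumerate(sorted(ids), start=1)) for m, ids in modes.items()}

# ... then re-combine the nth entries of each mode into one row (the GROUP BY step).
n_rows = max(len(ids) for ids in modes.values())
pivot = [
    (numbered["ACT"].get(r), numbered["SUB"].get(r), numbered["RPT"].get(r))
    for r in range(1, n_rows + 1)
]
print(pivot)
```

The third row comes out as (None, 'EMP04', None), matching the "blank - EMP04 - blank" row the poster wanted.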

  • A question about an analytic function used with the GROUP BY clause

    Hi all

    I created the following table named myenterprise
    CITY       STOREID    MONTH_NAME TOTAL_SALES            
    ---------- ---------- ---------- ---------------------- 
    paris      id1        January    1000                   
    paris      id1        March      7000                   
    paris      id1        April      2000                   
    paris      id2        November   2000                   
    paris      id3        January    5000                   
    london     id4        Janaury    3000                   
    london     id4        August     6000                   
    london     id5        September  500                    
    london     id5        November   1000
    If I want to find the total sales per city, I'll run the following query:
    SELECT city, SUM(total_sales) AS TOTAL_SALES_PER_CITY
    FROM myenterprise
    GROUP BY city
    ORDER BY city, TOTAL_SALES_PER_CITY;
    that works very well and produces the expected result, i.e.
    CITY       TOTAL_SALES_PER_CITY   
    ---------- ---------------------- 
    london     10500                  
    paris      17000            
    Now, in one of my SQL books (Mastering Oracle SQL) I found another method using SUM, but this time as an analytic function. Here is what the book suggests as an alternative:
    SELECT city, 
           SUM(SUM(total_sales)) OVER (PARTITION BY city) AS TOTAL_SALES_PER_CITY
    FROM myenterprise
    GROUP BY city
    ORDER BY city, TOTAL_SALES_PER_CITY;
    I know that analytic functions are executed after the GROUP BY clause has been fully processed and that, unlike regular aggregate functions, they return their result for each row belonging to the partitions specified in the PARTITION BY clause (if a partition clause is defined).

    Now my problem is that I do not understand why we have to use two SUM functions. If we use only one, i.e.
    SELECT city, 
           SUM(total_sales) OVER (PARTITION BY city) AS TOTAL_SALES_PER_CITY
    FROM myenterprise
    GROUP BY city
    ORDER BY city, TOTAL_SALES_PER_CITY;
    This generates the following error:
    Error starting at line 2 in command:
    SELECT city, 
           SUM(total_sales) OVER (PARTITION BY city) AS TOTAL_SALES_PER_CITY
    FROM myenterprise
    GROUP BY city
    ORDER BY city, TOTAL_SALES_PER_CITY
    Error at Command Line:2 Column:11
    Error report:
    SQL Error: ORA-00979: not a GROUP BY expression
    00979. 00000 -  "not a GROUP BY expression"
    *Cause:    
    *Action:
    The error is reported at line 2, column 11, which is the expression SUM(total_sales). Well, it's true that total_sales does not appear in the GROUP BY clause, but that should not be a problem: it is used in an analytic function, so it is evaluated after the GROUP BY clause.

    So here's my question:

    Why use SUM (SUM (total_sales)) instead of SUM (total_sales)?


    Thanks in advance!
    :)





    In case you are interested, that's my definition of the table:
    DROP TABLE myenterprise;
    CREATE TABLE myenterprise(
    city VARCHAR2(10), 
    storeid VARCHAR2(10),
    month_name VARCHAR2(10),
    total_sales NUMBER);
    
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('paris', 'id1', 'January', 1000);
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('paris', 'id1', 'March', 7000);
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('paris', 'id1', 'April', 2000);
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('paris', 'id2', 'November', 2000);
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('paris', 'id3', 'January', 5000);
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('london', 'id4', 'Janaury', 3000);
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('london', 'id4', 'August', 6000);
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('london', 'id5', 'September', 500);
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('london', 'id5', 'November', 1000);
    Edited by: dariyoosh on April 9, 2009 04:51

    It is clear that the analytic function is redundant here...
    You can even use AVG or any other analytic function...

    SQL> SELECT city,
      2         avg(SUM(total_sales)) OVER (PARTITION BY city) AS TOTAL_SALES_PER_CITY
      3  FROM myenterprise
      4  GROUP BY city
      5  ORDER BY city, TOTAL_SALES_PER_CITY;
    
    CITY       TOTAL_SALES_PER_CITY
    ---------- --------------------
    london                    10500
    paris                     17000
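The nested SUM(SUM(...)) can be reasoned about in two stages: the inner SUM is the regular aggregate that collapses the GROUP BY groups, and the outer, analytic SUM then runs over those already-grouped rows. A Python sketch of the same two stages (plain Python standing in for the SQL engine, using the myenterprise figures above):

```python
from itertools import groupby

# (city, total_sales) detail rows from myenterprise
rows = [
    ("paris", 1000), ("paris", 7000), ("paris", 2000),
    ("paris", 2000), ("paris", 5000),
    ("london", 3000), ("london", 6000), ("london", 500), ("london", 1000),
]

# Stage 1: GROUP BY city with the inner SUM -> one row per city
rows.sort(key=lambda r: r[0])                       # groupby needs sorted input
grouped = {city: sum(s for _, s in grp)
           for city, grp in groupby(rows, key=lambda r: r[0])}

# Stage 2: the outer, analytic SUM(...) OVER (PARTITION BY city) runs on the
# grouped rows; with one row per city, it simply returns that row's value.
result = {city: sum(v for c, v in grouped.items() if c == city) for city in grouped}
print(result)  # {'london': 10500, 'paris': 17000}
```

This is why the single analytic SUM(total_sales) fails: after GROUP BY city, the raw total_sales column no longer exists as a row-level value, so only SUM(SUM(total_sales)) is valid.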
    
  • Why different values for an analytic function over the same group/set

    I have the following table that I'll be using.

    Select * from table1;

    REC_ID | STATUS | DATE_FROM   | DATE_TO
    1      | C      | 07-JAN-2015 |
    2      | H      | 03-DEC-2014 | 06-JAN-2015
    3      | H      | 03-OCT-2014 | 02-DEC-2014
    4      | H      | 30-MAY-2014 | 02-OCT-2014
    5      | H      | 29-MAY-2014 | 29-MAY-2014
    6      | H      | 16-APR-2014 | 28-MAY-2014
    7      | H      | 25-APR-2007 | 15-APR-2014

    INSERT statements, if you need them:

    SET DEFINE OFF;

    CREATE TABLE TABLE1
    (
    REC_ID NUMBER,
    STATUS VARCHAR2 (1 BYTE) NOT NULL,
    DATE_FROM DATE NOT NULL,
    DATE_TO DATE
    );

    Insert into TABLE1
    (REC_ID, STATUS, DATE_FROM)
    Values
    (1, 'C', TO_DATE ('07/01/2015 00:00:00', 'DD/MM/YYYY HH24:MI:SS'));

    Insert into TABLE1
    (REC_ID, STATUS, DATE_FROM, DATE_TO)
    Values
    (2, 'H', TO_DATE ('03/12/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE ('06/01/2015 00:00:00', 'DD/MM/YYYY HH24:MI:SS'));

    Insert into TABLE1
    (REC_ID, STATUS, DATE_FROM, DATE_TO)
    Values
    (3, 'H', TO_DATE ('03/10/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE ('02/12/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'));

    Insert into TABLE1
    (REC_ID, STATUS, DATE_FROM, DATE_TO)
    Values
    (4, 'H', TO_DATE ('30/05/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE ('02/10/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'));

    Insert into TABLE1
    (REC_ID, STATUS, DATE_FROM, DATE_TO)
    Values
    (5, 'H', TO_DATE ('29/05/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE ('29/05/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'));

    Insert into TABLE1
    (REC_ID, STATUS, DATE_FROM, DATE_TO)
    Values
    (6, 'H', TO_DATE ('16/04/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE ('28/05/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'));

    Insert into TABLE1
    (REC_ID, STATUS, DATE_FROM, DATE_TO)
    Values
    (7, 'H', TO_DATE ('25/04/2007 00:00:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE ('15/04/2014 00:00:00', 'DD/MM/YYYY HH24:MI:SS'));

    COMMIT;

    I run the following analytic query...

    Select rec_id, date_from, date_to, status,
    min(date_from) over (partition by status order by date_from desc) min_dt_from_grp,
    row_number() over (partition by status order by date_from desc) rownumberdesc,
    row_number() over (partition by status order by date_from asc) rownumberasc
    FROM table1;

    the query result:

    REC_ID | DATE_FROM   | DATE_TO     | STATUS | MIN_DT_FROM_GRP | ROWNUMBERDESC | ROWNUMBERASC
    1      | 07-JAN-2015 |             | C      | 07-JAN-2015     | 1             | 1
    2      | 03-DEC-2014 | 06-JAN-2015 | H      | 03-DEC-2014     | 1             | 6
    3      | 03-OCT-2014 | 02-DEC-2014 | H      | 03-OCT-2014     | 2             | 5
    4      | 30-MAY-2014 | 02-OCT-2014 | H      | 30-MAY-2014     | 3             | 4
    5      | 29-MAY-2014 | 29-MAY-2014 | H      | 29-MAY-2014     | 4             | 3
    6      | 16-APR-2014 | 28-MAY-2014 | H      | 16-APR-2014     | 5             | 2
    7      | 25-APR-2007 | 15-APR-2014 | H      | 25-APR-2007     | 6             | 1

    If you look at the output above, there are different dates in the min_dt_from_grp column.

    My question is: if the analytic function calculates over a particular group/set, which here is partitioned by STATUS, and the min(date_from) for group H (the STATUS column) is 25-APR-2007, then why does the query above return different values in the min_dt_from_grp column?

    Hello

    Because you have specified an ORDER BY clause for the analytic function. By doing so, you compute the function over a moving window. Since you have not specified a windowing clause, the default applies:

    RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
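That default frame is why min_dt_from_grp varies per row: with ORDER BY date_from DESC, each row's frame contains only dates greater than or equal to its own, so the running MIN is just the current row's date_from. A small Python sketch of that running MIN (not Oracle; ISO-formatted strings standing in for the status 'H' dates above, which compare correctly as text):

```python
# date_from values for status 'H', already in DESC order, as ISO strings
dates_desc = ["2014-12-03", "2014-10-03", "2014-05-30",
              "2014-05-29", "2014-04-16", "2007-04-25"]

# Default frame: RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.
# With a descending sort, the frame for row i is dates_desc[:i + 1].
running_min = [min(dates_desc[: i + 1]) for i in range(len(dates_desc))]
print(running_min)  # each row's frame-minimum is its own date_from

# Without an ORDER BY, the frame is the whole partition and the MIN is constant:
whole_partition_min = min(dates_desc)
print(whole_partition_min)  # 2007-04-25
```

So to get 25-APR-2007 on every 'H' row, drop the ORDER BY from the MIN (keeping only PARTITION BY status), or spell out RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING.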

  • Order of evaluation of analytic function

    Hello

    I have a query rather like this:

    with
    -- This query selects a 'representative' acct_id per group (about 300 rows in total)
    acct_repres as
    (
    select distinct acct_id, origin_id, acct_parm_id from
    (
    select a.*
    , ap.source_id
    , dense_rank() over (partition by source_id order by origin_id nulls first, acct_nbr, acct_id) as odr
    from account a join account_parm ap on (a.parm_id = ap.acct_parm_id)
    )
    where odr = 1
    )
    select col1
    , col2
    , (select acct_id from acct_repres ar where ar.acct_parm_id = t2.acct_parm_id) col3
    , (select count(1) from acct_repres) col4
    from une_table t1
    join other_table t2 on (...)

    And here's the thing: the "acct_repres" subquery returns more than 300 rows when it is run separately. But when it is used in the CTE, sometimes (depending on the execution plan) it seems to contain just one row: the value in column col4 is '1',

    while the value of col3 is NULL in most cases.

    It looks like the dense_rank function and the condition "where odr = 1" are evaluated at the very end.

    When I used the MATERIALIZE hint, the result was the same.

    But when I put the result of acct_repres into a dedicated table and use that table instead of the CTE, the output is correct.

    Is this a bug? Or am I doing something wrong?

    PS: my db version is 11gR1 (11.1.0.7).

    some unorganized comments:

    - analytic functions are evaluated towards the end of execution ("the last set of operations performed in a query with the exception of the final ORDER BY clause" - http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions004.htm)

    - but still, the result of a SQL query must be deterministic, so I think your results are not expected behavior

    - the CBO has some problems with common table expressions (http://jonathanlewis.wordpress.com/2012/05/24/subquery-factoring-7/) even though they are a great help in structuring complex queries. In such cases you can avoid the problems by using inline views

    - your query uses the common table expression in scalar subqueries, and scalar subqueries are also likely to confuse the CBO. In addition, they are executed once for each row of your result set (or at least for each distinct correlation value) and can have a negative impact on query performance in many cases. Often they can be replaced by outer joins.

    - you say that the MATERIALIZE hint gives you an erroneous result: does the INLINE hint give you the correct results?

    Regards

    Martin Preiss

  • Problem with an analytic function on dates

    Hi all

    Oracle version:
    Oracle Database 11 g Enterprise Edition Release 11.2.0.2.0 - 64 bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    "CORE 11.2.0.2.0 Production."
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    I have a problem with an analytic function on a date. I'm trying to group records based on a timestamp, but I'm failing to do so.
    Could you please help me find what I'm missing.
    This is the subquery. No issue with this. I'm just posting it for reference. 
    select sum(disclosed_cost_allocation.to_be_paid_amt) amt,
        substr(reference_data.ref_code,4,10) cd,
        to_char(external_order_status.status_updated_tmstp, 'DD-MON-YYYY HH24:MI:SS') tmstp,
        DISCLOSED_CLOSING_COST.DISCLOSED_CLOSING_COST_ID id
      FROM Deal.Fee_Mapping_Definition ,
        Deal.Fee_Index_Definition ,
        Deal.Fee_Closing_Cost_Item,
        Deal.Closing_Cost,
        Deal.Document_Generation_Request,
        deal.PRODUCT_REQUEST,
        deal.External_Order_Request,
        deal.External_Order_Status,
        deal. DISCLOSED_CLOSING_COST,
        deal.DISCLOSED_COST_ALLOCATION,
        deal.reference_data
      WHERE Fee_Mapping_Definition.Fee_Code                    = Fee_Index_Definition.Fee_Code
      AND Fee_Index_Definition.Fee_Index_Definition_Id         = Fee_Closing_Cost_Item.Fee_Index_Definition_Id
      AND Fee_Closing_Cost_Item.Closing_Cost_Id                = Closing_Cost.Closing_Cost_Id
      AND CLOSING_COST.PRODUCT_REQUEST_ID                      = Document_Generation_Request.Product_Request_Id
      AND closing_cost.product_request_id                      = product_request.product_request_id
      AND Product_Request.Deal_Id                              = External_Order_Request.Deal_Id
      AND external_order_request.external_order_request_id     = external_order_status.external_order_request_id
      AND external_order_request.external_order_request_id     = disclosed_closing_cost.external_order_request_id
      AND DISCLOSED_CLOSING_COST. DISCLOSED_CLOSING_COST_ID    = DISCLOSED_COST_ALLOCATION.DISCLOSED_CLOSING_COST_ID
      AND Fee_Index_Definition.Fee_Index_Definition_Id         = Disclosed_Closing_Cost.Fee_Index_Definition_Id
      AND Fee_Mapping_Definition.Document_Line_Series_Ref_Id   = Reference_Data.Reference_Data_Id
      AND Document_Generation_Request.Document_Package_Ref_Id IN (7392 ,2209 )
      AND External_Order_Status.Order_Status_Txt               = ('GenerationCompleted')
      AND Fee_Mapping_Definition.Document_Line_Series_Ref_Id  IN ( 7789, 7788,7596 )
      AND FEE_MAPPING_DEFINITION.DOCUMENT_TYPE_REF_ID          = 1099
      AND Document_Generation_Request.Product_Request_Id      IN
        (SELECT PRODUCT_REQUEST.PRODUCT_REQUEST_id
        FROM Deal.Disclosed_Cost_Allocation,
          Deal.Disclosed_Closing_Cost,
          DEAL.External_Order_Request,
          DEAL.PRODUCT_REQUEST,
          Deal.Scenario
        WHERE Disclosed_Cost_Allocation.Disclosed_Closing_Cost_Id = Disclosed_Closing_Cost.Disclosed_Closing_Cost_Id
        AND Disclosed_Closing_Cost.External_Order_Request_Id      = External_Order_Request.External_Order_Request_Id
        AND External_Order_Request.Deal_Id                        = Product_Request.Deal_Id
        AND product_request.scenario_id                           = scenario.scenario_id
        AND SCENARIO.SCENARIO_STATUS_TYPE_REF_ID                  = 7206
        AND product_request.servicing_loan_acct_num              IS NOT NULL
        AND product_request.servicing_loan_acct_num               = 0017498379
          --AND Disclosed_Cost_Allocation.Disclosed_Cost_Allocation_Id = 5095263
        )
      GROUP BY DISCLOSED_CLOSING_COST.DISCLOSED_CLOSING_COST_ID,
        External_Order_Status.Status_Updated_Tmstp,
        Reference_Data.Ref_Code,
        disclosed_cost_allocation.to_be_paid_amt
      order by 3 desc,
        1 DESC;
    
    Result:
    2000     1304-1399     28-JUL-2012 19:49:47     6880959
    312     1302     28-JUL-2012 19:49:47     6880958
    76     1303     28-JUL-2012 19:49:47     6880957
    2000     1304-1399     28-JUL-2012 18:02:16     6880539
    312     1302     28-JUL-2012 18:02:16     6880538
    76     1303     28-JUL-2012 18:02:16     6880537
    
    
    But when I try to group by the timestamp using an analytic function,
    
    
    select amt 
            ,cd 
            ,rank() over(partition by tmstp order by tmstp desc) rn 
    from 
    (select sum(disclosed_cost_allocation.to_be_paid_amt) amt,
        substr(reference_data.ref_code,4,10) cd,
        to_char(external_order_status.status_updated_tmstp, 'DD-MON-YYYY HH24:MI:SS') tmstp,
        DISCLOSED_CLOSING_COST.DISCLOSED_CLOSING_COST_ID id
      FROM Deal.Fee_Mapping_Definition ,
        Deal.Fee_Index_Definition ,
        Deal.Fee_Closing_Cost_Item,
        Deal.Closing_Cost,
        Deal.Document_Generation_Request,
        deal.PRODUCT_REQUEST,
        deal.External_Order_Request,
        deal.External_Order_Status,
        deal. DISCLOSED_CLOSING_COST,
        deal.DISCLOSED_COST_ALLOCATION,
        deal.reference_data
      WHERE Fee_Mapping_Definition.Fee_Code                    = Fee_Index_Definition.Fee_Code
      AND Fee_Index_Definition.Fee_Index_Definition_Id         = Fee_Closing_Cost_Item.Fee_Index_Definition_Id
      AND Fee_Closing_Cost_Item.Closing_Cost_Id                = Closing_Cost.Closing_Cost_Id
      AND CLOSING_COST.PRODUCT_REQUEST_ID                      = Document_Generation_Request.Product_Request_Id
      AND closing_cost.product_request_id                      = product_request.product_request_id
      AND Product_Request.Deal_Id                              = External_Order_Request.Deal_Id
      AND external_order_request.external_order_request_id     = external_order_status.external_order_request_id
      AND external_order_request.external_order_request_id     = disclosed_closing_cost.external_order_request_id
      AND DISCLOSED_CLOSING_COST. DISCLOSED_CLOSING_COST_ID    = DISCLOSED_COST_ALLOCATION.DISCLOSED_CLOSING_COST_ID
      AND Fee_Index_Definition.Fee_Index_Definition_Id         = Disclosed_Closing_Cost.Fee_Index_Definition_Id
      AND Fee_Mapping_Definition.Document_Line_Series_Ref_Id   = Reference_Data.Reference_Data_Id
      AND Document_Generation_Request.Document_Package_Ref_Id IN (7392 ,2209 )
      AND External_Order_Status.Order_Status_Txt               = ('GenerationCompleted')
      AND Fee_Mapping_Definition.Document_Line_Series_Ref_Id  IN ( 7789, 7788,7596 )
      AND FEE_MAPPING_DEFINITION.DOCUMENT_TYPE_REF_ID          = 1099
      AND Document_Generation_Request.Product_Request_Id      IN
        (SELECT PRODUCT_REQUEST.PRODUCT_REQUEST_id
        FROM Deal.Disclosed_Cost_Allocation,
          Deal.Disclosed_Closing_Cost,
          DEAL.External_Order_Request,
          DEAL.PRODUCT_REQUEST,
          Deal.Scenario
        WHERE Disclosed_Cost_Allocation.Disclosed_Closing_Cost_Id = Disclosed_Closing_Cost.Disclosed_Closing_Cost_Id
        AND Disclosed_Closing_Cost.External_Order_Request_Id      = External_Order_Request.External_Order_Request_Id
        AND External_Order_Request.Deal_Id                        = Product_Request.Deal_Id
        AND product_request.scenario_id                           = scenario.scenario_id
        AND SCENARIO.SCENARIO_STATUS_TYPE_REF_ID                  = 7206
        AND product_request.servicing_loan_acct_num              IS NOT NULL
        AND product_request.servicing_loan_acct_num               = 0017498379
          --AND Disclosed_Cost_Allocation.Disclosed_Cost_Allocation_Id = 5095263
        )
      GROUP BY DISCLOSED_CLOSING_COST.DISCLOSED_CLOSING_COST_ID,
        External_Order_Status.Status_Updated_Tmstp,
        Reference_Data.Ref_Code,
        disclosed_cost_allocation.to_be_paid_amt
      order by 3 desc,
        1 DESC);
    
    Result:
    312     1302            1
    2000     1304-1399     1
    76     1303            1
    312     1302            1
    2000     1304-1399     1
    76     1303            1 
    
    
    Required output:
    312     1302            1
    2000     1304-1399     1
    76     1303            1
    312     1302            2
    2000     1304-1399     2
    76     1303            2
    THX
    Rod.

    Hey, Rod,

    My guess is that you want:

    , dense_rank () over (order by  tmstp  desc)  AS rn 
    

    RANK means you'll skip numbers when there is a tie. For example, if all 3 rows have the exact same latest tmstp, all 3 rows would be assigned number 1; RANK would assign 4 to the next row, but DENSE_RANK assigns 2.

    "PARTITION x" means that you are looking for a separate series of numbers (starting with 1) for each value of x. If you want just a series of numbers for the entire result set, then do not use a PARTITION BY clause at all. (PARTITION BY is never required.)
    Maybe you want to PARTITIONNER IN cd. I can't do it without some examples of data, as well as an explanation of why you want the results of these data.
    You certainly don't want to PARTITION you BY the same expression ORDER BY; It simply means that all the lines are tied for #1.

    I hope that answers your question.
    If not, post a small example of data (CREATE TABLE and INSERT statements, only the relevant columns) for all of the tables involved, and also post the results you want from that data.
    Explain, using specific examples, how you get those results from that data.
    Simplify the problem as much as possible.
    Always say which version of Oracle you are using.
    See the forum FAQ {message:id=9360002}

    Published by: Frank Kulash, August 1, 2012 13:20
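    To see the RANK vs. DENSE_RANK difference concretely, here is a small runnable sketch. It uses Python's sqlite3 rather than Oracle (SQLite's RANK and DENSE_RANK behave the same way here); the table and values are made up for illustration.

```python
import sqlite3

# Hypothetical data: three rows tied on the latest tmstp, plus one older row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, tmstp TEXT);
INSERT INTO t VALUES (1,'2012-08-01'), (2,'2012-08-01'),
                     (3,'2012-08-01'), (4,'2012-07-31');
""")
rows = conn.execute("""
SELECT id,
       RANK()       OVER (ORDER BY tmstp DESC) AS rnk,
       DENSE_RANK() OVER (ORDER BY tmstp DESC) AS drnk
FROM t
ORDER BY rnk, id
""").fetchall()
# All three tied rows get 1; RANK then jumps to 4, DENSE_RANK continues with 2.
print(rows)
```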

  • Question about an analytic function

    Hello
    I use the query below

    Select * from
    (
    SELECT flag, s_date, ROW_NUMBER () OVER (PARTITION BY flag
    ORDER BY s_date) d
    FROM table_name
    ORDER BY s_date
    );

    below is the output it gives

    Flag | S_DATE | D
    Y     | 27/02/2012 05:33 |     1
    Y     | 27/02/2012 05:34 |     2
    Y     | 27/02/2012 05:34 |     3
    N     | 27/02/2012 05:34 |     1
    N     | 27/02/2012 05:34 |     2
    N     | 27/02/2012 05:34 |     3
    N     | 27/02/2012 05:35 |     4
    N     | 27/02/2012 05:35 |     5
    Y     |  27/02/2012 05:36 |     4
    Y     |  27/02/2012 05:36 |     5
    Y     |  27/02/2012 05:36 |     6


    But I want the output below, where the numbering restarts in the last 3 rows

    Flag | S_DATE | D

    Y     | 27/02/2012 05:33 |     1
    Y     | 27/02/2012 05:34 |     2
    Y     | 27/02/2012 05:34 |     3
    N     | 27/02/2012 05:34 |     1
    N     | 27/02/2012 05:34 |     2
    N     | 27/02/2012 05:34 |     3
    N     | 27/02/2012 05:35 |     4
    N     | 27/02/2012 05:35 |     5
    Y     |  27/02/2012 05:36 |     1
    Y     |  27/02/2012 05:36 |     2
    Y     |  27/02/2012 05:36 |     3

    I used the analytical function.

    Published by: user8858890 on February 27, 2012 02:00

    Hello

    user8858890 wrote:
    ... But I want the output below, where the numbering restarts in the last 3 rows

    Flag | S_DATE | D

    Y     | 27/02/2012 05:33 |     1
    Y     | 27/02/2012 05:34 |     2
    Y     | 27/02/2012 05:34 |     3
    N     | 27/02/2012 05:34 |     1
    N     | 27/02/2012 05:34 |     2
    N     | 27/02/2012 05:34 |     3
    N     | 27/02/2012 05:35 |     4
    N     | 27/02/2012 05:35 |     5
    Y     |  27/02/2012 05:36 |     1
    Y     |  27/02/2012 05:36 |     2
    Y     |  27/02/2012 05:36 |     3

    Why do you want the last 3 rows (which have flag = 'Y') to be numbered 1, 2, 3, when the first 3 rows (which also have flag = 'Y') already have the numbers 1, 2 and 3? Do you want a separate series, starting with #1, whenever there is a group of consecutive rows (when ordered by s_date) that have the same flag? If so, you need to identify the groups, like this:

    WITH     got_grp_id     AS
    (
         SELECT     flag
         ,     s_date
         ,     ROWID               AS r_id
         ,     ROW_NUMBER () OVER ( ORDER BY      s_date
                                   ,                  ROWID
                           )
               - ROW_NUMBER () OVER ( PARTITION BY  flag
                                         ORDER BY          s_date
                             ,               ROWID
                           )    AS grp_id
         FROM    table_name
    )
    SELECT       flag
    ,       s_date
    ,       ROW_NUMBER () OVER ( PARTITION BY  flag
                                 ,          grp_id
                          ORDER BY          s_date
                          ,               r_id
                        )      AS d
    FROM      got_grp_id
    ORDER BY  s_date
    ,            grp_id
    ,       d
    ;
    

    This assumes that each row can be uniquely identified, so that the order is unambiguous. In your sample data there are completely identical rows, so I used the ROWID to uniquely identify them. Using ROWID assumes table_name is a real table, not just a result set.

    I hope that answers your question.
    If not, post a small example of data (CREATE TABLE and INSERT statements, only the relevant columns) for all of the tables involved, and the results you want from that data.
    Explain, using specific examples, how you get those results from that data.
    Always say which version of Oracle you are using.
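    For anyone who wants to try the grouping technique, here is a self-contained sketch of the same row-number-difference idea, using Python's sqlite3 (SQLite also has a rowid, and its window functions behave like Oracle's here); the sample rows mirror the ones posted.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_name (flag TEXT, s_date TEXT);
INSERT INTO table_name VALUES
 ('Y','05:33'),('Y','05:34'),('Y','05:34'),
 ('N','05:34'),('N','05:34'),('N','05:34'),('N','05:35'),('N','05:35'),
 ('Y','05:36'),('Y','05:36'),('Y','05:36');
""")
rows = conn.execute("""
WITH got_grp_id AS (
    SELECT flag, s_date, rowid AS r_id,
           ROW_NUMBER() OVER (ORDER BY s_date, rowid)
         - ROW_NUMBER() OVER (PARTITION BY flag ORDER BY s_date, rowid) AS grp_id
    FROM table_name
)
SELECT flag,
       ROW_NUMBER() OVER (PARTITION BY flag, grp_id
                          ORDER BY s_date, r_id) AS d
FROM got_grp_id
ORDER BY s_date, grp_id, d
""").fetchall()
# The last three 'Y' rows restart at 1, as the poster wanted.
print(rows)
```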

  • No SQL view merging when using analytic functions

    Hi, SQL tuning specialists:
    I have a question about inline view merging.

    I have a simple view with an analytic function inside. When I query it, it does not use an index.


    CREATE OR REPLACE VIEW ttt
    AS
    SELECT empno, deptno,
    ROW_NUMBER() over (PARTITION BY deptno ORDER BY deptno desc NULLS last) part_seq
    FROM emp aaa;

    -- That will do a full table scan of emp
    Select * from ttt
    WHERE empno = 7369;


    -- If I do not use the view but run the query directly, the index is used
    SELECT empno, deptno,
    ROW_NUMBER() over (PARTITION BY deptno ORDER BY deptno desc NULLS last) part_seq
    FROM emp aaa
    WHERE empno = 7369;


    The question is: how can I force the first query to use the index?

    Thank you

    MScallion wrote:
    What happens if you use the push_pred hint?

    Nothing will happen. And it would be a bug if it did.

    select * from ttt
    WHERE empno=7369
    

    and

    SELECT empno,deptno,
    row_number() OVER (PARTITION BY deptno ORDER BY deptno desc NULLS last) part_seq
    FROM emp aaa
    WHERE empno=7369
    

    are two logically different queries. Analytic functions are applied after the result set is formed. So the first query selects all rows from the emp table, then assigns ROW_NUMBER() to the retrieved rows, and only then picks out the row with empno = 7369. The second query selects the row with empno = 7369 from emp and only then applies ROW_NUMBER() - so, since emp.empno is unique, the ROW_NUMBER returned by the second query will always equal 1:

    SQL> select * from ttt
      2  WHERE empno=7369
      3  /
    
         EMPNO     DEPTNO   PART_SEQ
    ---------- ---------- ----------
          7369         20          4
    
    SQL> SELECT empno,deptno,
      2  row_number() OVER (PARTITION BY deptno ORDER BY deptno desc NULLS last) part_seq
      3  FROM emp aaa
      4  WHERE empno=7369
      5  /
    
         EMPNO     DEPTNO   PART_SEQ
    ---------- ---------- ----------
          7369         20          1
    
    SQL> 
    

    SY.
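    SY's point can be demonstrated with any SQL engine that has window functions. Below is a sketch using Python's sqlite3 with made-up emp rows; ORDER BY sal DESC is used instead of the original ORDER BY deptno so the difference between the two queries is visible.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (empno INTEGER, deptno INTEGER, sal INTEGER);
INSERT INTO emp VALUES (7369,20,800),(7566,20,2975),(7788,20,3000),
                       (7902,20,3000),(7876,20,1100);
""")
# View-style: ROW_NUMBER is computed over ALL rows, then the filter applies.
view_style = conn.execute("""
SELECT part_seq FROM (
  SELECT empno,
         ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY sal DESC) AS part_seq
  FROM emp)
WHERE empno = 7369
""").fetchone()[0]
# Direct query: the filter applies first, so only one row gets numbered.
direct = conn.execute("""
SELECT ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY sal DESC) AS part_seq
FROM emp
WHERE empno = 7369
""").fetchone()[0]
print(view_style, direct)
```

    This is why pushing the predicate into the view would change the answer: the two queries are not equivalent.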

  • Using the analytical function.

    Hello

    I have this scenario.
    with t as 
    (
    select 21009 item_id,9 primary_available,450 max_qty,100 bulk_available,12122 bulk_locator_id 
    from dual union all
    select 21009 item_id,9 primary_available,450 max_qty,2775 bulk_available,8704 bulk_locator_id 
    from dual union all
    select 21009 item_id,9 primary_available,450 max_qty,524 bulk_available,15614 bulk_locator_id 
    from dual union all
    select 21009 item_id,9 primary_available,450 max_qty,3300 bulk_available,15654 bulk_locator_id
    from dual)
    select  t.* from t;
    We have two locations for a given item_id. Primary and in bulk.

    I'm trying to get a SELECT statement out of this, where I restock the primary location's quantity from the bulk locations, BUT taking from the smallest bulk location first. Once the max quantity is reached, I shouldn't take any more product.

    Is there an analytic function that would do this?

    This is the best I could come up with.
    with t as 
    (
    select 21009 item_id,9 primary_available,450 max_qty,100 bulk_available,12122 bulk_locator_id 
    from dual union all
    select 21009 item_id,9 primary_available,450 max_qty,2775 bulk_available,8704 bulk_locator_id 
    from dual union all
    select 21009 item_id,9 primary_available,450 max_qty,524 bulk_available,15614 bulk_locator_id 
    from dual union all
    select 21009 item_id,9 primary_available,450 max_qty,3300 bulk_available,15654 bulk_locator_id
    from dual)
    select  t.*, max_qty -
                   (primary_available + SUM(bulk_available)
                    over(PARTITION BY item_id ORDER BY bulk_available)) replen_this_much 
                    from t;
    So, in this scenario, I want to replenish 100 from bulk_locator_id 12122 and 341 from bulk_locator_id 15614. That's all. ZERO from the other locations (bulk_locator_ids). If the question is not clear, please let me know.

    Published by: RPuttagunta on September 11, 2009 16:23

    Hello

    Thanks for posting the sample data.
    It would be useful if you also posted the output you want. Is this it?

    .                                 BULK_         REPLEN_
    ITEM_ PRIMARY_   MAX_  BULK_      LOCATOR_        THIS_
    ID    AVAILABLE  QTY   AVAILABLE  ID               MUCH
    ----- ---------- ----- ---------- ---------- ----------
    21009 9          450   100        12122             100
    21009 9          450   524        15614             341
    21009 9          450   2775       8704                0
    21009 9          450   3300       15654               0
    

    If so, you can get to this:

    SELECT       t.*
    ,       GREATEST ( 0
                 , LEAST ( TO_NUMBER (bulk_available)
                         , TO_NUMBER (max_qty)
                        - ( TO_NUMBER (primary_available)
                          + NVL ( SUM (TO_NUMBER (bulk_available))
                                  OVER ( PARTITION BY  item_id
                                         ORDER BY      TO_NUMBER (bulk_available)
                                   ROWS BETWEEN  UNBOUNDED PRECEDING
                                     AND      1           PRECEDING
                                 )
                             , 0
                             )
                          )
                      )
                 ) AS replen_this_much
    FROM       t
    ORDER BY  item_id
    ,            TO_NUMBER (bulk_available)
    ;
    

    You should really store your numbers in NUMBER columns.

    You essentially posted everything you need; the analytic function was already there. The problem was just wrapping that analytic function (or something very close to it) in LEAST and GREATEST, so that the replen_this_much column is always between 0 and TO_NUMBER (bulk_available).
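    The GREATEST/LEAST formula can be checked with a quick runnable sketch. This uses Python's sqlite3 with the poster's data; SQLite's multi-argument MAX/MIN play the roles of Oracle's GREATEST/LEAST, and the columns are declared numeric so no TO_NUMBER is needed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (item_id INT, primary_available INT, max_qty INT,
                bulk_available INT, bulk_locator_id INT);
INSERT INTO t VALUES
 (21009,9,450,100,12122),(21009,9,450,2775,8704),
 (21009,9,450,524,15614),(21009,9,450,3300,15654);
""")
rows = conn.execute("""
SELECT bulk_locator_id,
       MAX(0, MIN(bulk_available,
                  max_qty - (primary_available
                             + COALESCE(SUM(bulk_available) OVER (
                                   PARTITION BY item_id
                                   ORDER BY bulk_available
                                   ROWS BETWEEN UNBOUNDED PRECEDING
                                        AND 1 PRECEDING), 0)))) AS replen
FROM t
ORDER BY bulk_available
""").fetchall()
# Take 100 from the smallest location, 341 from the next, 0 from the rest.
print(rows)
```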

  • Analytic function to summarize

    Hi, I have a question regarding the analytical functions.

    SCENARIO
    I have two fields: TICKET_ID and TASK_TIME

    For each TICKET_ID, I have a task_time calculated as in the example:
     
    TICKET_ID                            TASK_TIME
    112293                                 25                                   
    112294                                 1200                                 
    112294                                 40                                   
    112295                                 40                                   
    112296                                 60                                   
    112297                                 120                                  
    112298                                 60                                   
    112299                                 60                                   
    112300                                 180                                  
    112301                                 1440                                 
    112302                                 4320                                 
    112303                                 120                                  
    112304                                 306                                  
    The TASK_TIME calculation is the following: IPTV_COUNT_TASKS_TIME (JTF_TASKS_B.Task_Id, :CREATION_DATE_UL)

    where:

    IPTV_COUNT_TASKS_TIME is a DB function
    JTF_TASKS_B.Task_Id is a field at runtime
    :CREATION_DATE_UL is a PERIOD parameter


    As you can see, for ticket 112294 I have 2 rows.
    What I want to achieve is the following: I would like to have the average task_time (taking all tickets into account), but I don't want to count ticket 112294 twice.

    So... What have I done?
    I created another calculated field named SUM (Task_time) whose calculation is: SUM (TASK_TIME) OVER (PARTITION BY TICKET_ID)
    and I got:
     
    TICKET_ID                            TASK_TIME
    112293                                 25                                   
    112294                                 1240  
    112294                                 1240                               
    112295                                 40                                   
    112296                                 60                                   
    112297                                 120                                  
    112298                                 60                                   
    112299                                 60                                   
    112300                                 180                                  
    112301                                 1440                                 
    112302                                 4320                                 
    112303                                 120                                  
    112304                                 306                                  
    ...and it isn't good because, in order to calculate AVG (TASK_TIME), I want to have ticket 112294 only once (and see the value 1240 only once).
    In other words, I need to sum the task_time for each ticket and show the result only once per ticket.

    Where is the error? How do I change the SUM (Task_time) calculation?

    Thanks in advance for any suggestion

    Alex

    Hello

    You can create the package in any schema, although in an APPS environment I would usually create the package in the APPS schema, because then you do not need to grant privileges.

    Rod West
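    On the original question (an average task_time that counts each ticket only once): one way that avoids the duplicate-row problem entirely is to aggregate per ticket in an inline view first, then average the per-ticket totals. A sketch with made-up data, using Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (ticket_id INTEGER, task_time INTEGER);
INSERT INTO t VALUES (112293,25),(112294,1200),(112294,40),(112295,40);
""")
# Sum per ticket in an inline view, then average the per-ticket totals:
avg_task = conn.execute("""
SELECT AVG(ticket_total) FROM (
  SELECT ticket_id, SUM(task_time) AS ticket_total
  FROM t
  GROUP BY ticket_id)
""").fetchone()[0]
# (25 + 1240 + 40) / 3 tickets
print(avg_task)
```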

  • Questions about the EA8500 MU-MIMO functionality

    Dear Linksys

    I bought EA8500, and it arrived today.

    I have two questions about the functionality of MU-MIMO of EA8500 AP.

    1. I want to compare the performance between SU-MIMO and MU-MIMO, but there is no option to control this feature in the router admin page. Is there any way to do this?

    2. The EA8500 supports a 1733 Mbps wireless speed, but that is more than the 1 Gbps speed of the ethernet cable.

    Does this router really support only 1 Gbps over ethernet? Or is there an option to support more throughput, such as link aggregation?

    Thank you

    Hi, hyeonu. For your first question: it is not possible, because there is no option to disable the MU-MIMO feature on your Linksys EA8500 router. As for your second question: you cannot get throughput higher than 1 Gbps, since that is the maximum capacity of your ethernet connection.

  • Cannot use analytic functions such as LAG/LEAD in ODI 12c components except in an expression

    Hi, I am a beginner with ODI 12c.

    I'm trying to get the last two comments made on a product for a given product id, and load them into a target.

    I have a source table something like

    SR_NO   Product   Comments    LAST_UPDATED_TS

    1       car       good        2015/05/15 08:30:25

    1       car       average     2015/05/15 10:30:25

    2       jeep      super       2015/05/15 11:30:25

    1       car       bad         2015/05/15 11:30:25

    2       jeep      horrible    2015/05/15 09:30:25

    2       jeep      excellent   2015/05/15 12:30:25


    I want a target table based on the last updated timestamps, like this (last two comments):


    SR_NO   Comment1    Comment2

    1       bad         average

    2       excellent   super

    I used the logic below to get the records in SQL Developer, but in ODI 12c I'm not able to do this by mapping a source to the target table and applying analytic functions to the columns of the target table. Can someone help me solve this problem?

    SELECT * FROM (

    SELECT SR_NO, Comment1, LAG(Comment1,1) OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS ASC) Comment2,

    ROW_NUMBER() OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS DESC) RN

    FROM Source_table

    ) M

    WHERE RN = 1

    ;

    Hmm, I'm afraid ODI puts the filter too early in the query: it generates:

    SELECT * FROM (

    SELECT SR_NO, Comment1, LAG(Comment1,1) OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS ASC) Comment2,

    ROW_NUMBER() OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS DESC) RN

    FROM Source_table

    WHERE RN = 1

    ) M

    ;

    Instead of:

    SELECT * FROM (

    SELECT SR_NO, Comment1, LAG(Comment1,1) OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS ASC) Comment2,

    ROW_NUMBER() OVER (PARTITION BY SR_NO ORDER BY LAST_UPDATED_TS DESC) RN

    FROM Source_table

    ) M

    WHERE RN = 1

    ;

    Even by changing the 'Execute on Hint' of your expression component to run it on the source, the query will stay the same.

    I think the easiest solution for you is to put everything before the filter into a reusable mapping with an output signature. Then drag this reusable mapping into your mapping as the new source and check the box "subselect enabled."

    Your final mapping should look like this:

    Hope this helps.

    Kind regards

    JeromeFr
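    The poster's LAG/ROW_NUMBER logic can be run as the following sketch (Python's sqlite3, whose LAG and ROW_NUMBER behave like Oracle's; the sample rows come from the post, and the column names are my guesses at the originals):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src (sr_no INT, product TEXT, comments TEXT, ts TEXT);
INSERT INTO src VALUES
 (1,'car','good','2015-05-15 08:30:25'),
 (1,'car','average','2015-05-15 10:30:25'),
 (2,'jeep','super','2015-05-15 11:30:25'),
 (1,'car','bad','2015-05-15 11:30:25'),
 (2,'jeep','horrible','2015-05-15 09:30:25'),
 (2,'jeep','excellent','2015-05-15 12:30:25');
""")
rows = conn.execute("""
SELECT sr_no, comment1, comment2 FROM (
  SELECT sr_no,
         comments AS comment1,
         LAG(comments) OVER (PARTITION BY sr_no ORDER BY ts) AS comment2,
         ROW_NUMBER() OVER (PARTITION BY sr_no ORDER BY ts DESC) AS rn
  FROM src)
WHERE rn = 1
ORDER BY sr_no
""").fetchall()
# Latest comment per sr_no, with the previous comment via LAG.
print(rows)
```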

  • Which analytical function to use?


    Hi gurus,

    DB - Oracle 11gR2

    I have the following sample data in table test_a.

    col1             col2             col3

    -----            -----            -----

    x                y                y

    p                q                y

    a                b                y

    p                q                y

    t                r                y

    p                q                y

    The col3 column is always 'y'. But here the data p, q is repeated 3 times (duplicated), and when that is the case I want to update only the first such record's col3 to 'n', that is: p, q, n. The rest stays as it is.

    I am able to get the row_number() for it, but not able to do the update.

    select col1, col2, col3, row_number() over (partition by col2 order by col1) r_num from test_a

    Would it be possible to do this directly with an analytic function?

    Thank you

    SID

    Assuming COL4 gives the logical order...

    Something like that?

    with x as (

    select 'x' col1, 'y' col2, 'y' col3, 1 col4 from dual union all

    select 'p' col1, 'q' col2, 'y' col3, 2 col4 from dual union all

    select 'a' col1, 'b' col2, 'y' col3, 3 col4 from dual union all

    select 'p' col1, 'q' col2, 'y' col3, 4 col4 from dual union all

    select 't' col1, 'r' col2, 'y' col3, 5 col4 from dual union all

    select 'p' col1, 'q' col2, 'y' col3, 6 col4 from dual

    )

    ---

    select * from (

    select x.*,

    row_number() over (partition by col1, col2, col3 order by col4) rn1,

    row_number() over (partition by col1, col2, col3 order by col4 desc) rn2

    from x

    )

    where rn1 = 1 and rn2 <> 1;

    Understand the logic, then simply change the SELECT query into an UPDATE...
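    The technique in the answer - first and last row numbers per duplicate group - can be run as the following sketch (Python's sqlite3; col4 supplies the ordering, as in the answer). The filter rn1 = 1 AND rn2 <> 1 keeps only the first row of each group that has more than one row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test_a (col1 TEXT, col2 TEXT, col3 TEXT, col4 INT);
INSERT INTO test_a VALUES
 ('x','y','y',1),('p','q','y',2),('a','b','y',3),
 ('p','q','y',4),('t','r','y',5),('p','q','y',6);
""")
rows = conn.execute("""
SELECT col1, col2, col4 FROM (
  SELECT test_a.*,
         ROW_NUMBER() OVER (PARTITION BY col1, col2, col3 ORDER BY col4)      AS rn1,
         ROW_NUMBER() OVER (PARTITION BY col1, col2, col3 ORDER BY col4 DESC) AS rn2
  FROM test_a)
WHERE rn1 = 1 AND rn2 <> 1
""").fetchall()
# Only the first ('p','q') row is flagged; unduplicated rows have rn1 = rn2 = 1.
print(rows)
```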

  • Using the analytic function

    Oracle 11g Release 2

    I'm assuming that the best solution is the use of analytical functions.

    create table test3
    ( part_type_id  varchar2(50)
    ,group_id      number
    ,part_desc_id  number
    ,part_cmt      varchar2(50)
    )
    /
    
    insert into test3 values( 'ABC123',1,10,'comment1');
    insert into test3 values( 'ABC123',1,10,'comment2');
    insert into test3 values( 'ABC123',2,15,'comment1');
    insert into test3 values( 'ABC123',2,15,'comment2');
    insert into test3 values( 'EFG123',25,75,'comment3');
    insert into test3 values( 'EFG123',25,75,'comment4');
    insert into test3 values( 'EFG123',25,75,'comment5');
    insert into test3 values( 'XYZ123',1,10,'comment6');
    insert into test3 values( 'XYZ123',2,15,'comment7');
    commit;
    
    select * from test3;
    
    PART_TYPE_ID           GROUP_ID PART_DESC_ID PART_CMT
    -------------------- ---------- ------------ --------------------
    ABC123                        1           10 comment1
    ABC123                        1           10 comment2
    ABC123                        2           15 comment1
    ABC123                        2           15 comment2
    EFG123                        25          75 comment3
    EFG123                        25          75 comment4
    EFG123                        25          75 comment5
    XYZ123                        1           10 comment6
    XYZ123                        2           15 comment7
    
    9 rows selected.
    
    Desired output:
    
    PART_TYPE_ID           GROUP_ID PART_DESC_ID PART_CMT
    -------------------- ---------- ------------ --------------------
    ABC123                        1           10 comment1 
    ABC123                        2           15 comment1
    XYZ123                        1           10 comment6
    XYZ123                        2           15 comment7
    
    RULE: where one part_type_id has multiple (2 or more distinct combinations) of group_id/part_desc_id
    
    NOTE: There are about 12 columns in the table, for brevity I only included 4.
    
    
    
    

    Post edited by orclrunner: updated the desired output and rule

    Hello

    Here's one way:

    WITH got_d_count AS

    (

    SELECT part_type_id, group_id, part_desc_id,

    MIN (part_cmt) AS min_part_cmt,

    COUNT (*) OVER (PARTITION BY part_type_id) AS d_count

    FROM test3

    GROUP BY part_type_id, group_id, part_desc_id

    )

    SELECT DISTINCT

    part_type_id, group_id, part_desc_id, min_part_cmt

    FROM got_d_count

    WHERE d_count > 1

    ;

    Output:

    PART_TYPE_ID GROUP_ID PART_DESC_ID MIN_PART_CMT

    ------------ -------- ------------ ------------

    ABC123              1           10 comment1

    ABC123              2           15 comment1

    XYZ123              1           10 comment6

    XYZ123              2           15 comment7

    Many analytic functions, such as COUNT and MIN, also have aggregate versions, which can give the same results.  Use the analytic version when each row of output corresponds to exactly 1 row of input, and the aggregate/GROUP BY version when each row of output corresponds to a group of 1 or more input rows.  In this problem, each output row appears to correspond to a group of input rows having the same part_type_id, group_id, and part_desc_id (I'm just guessing; this was never stated), so I used GROUP BY to get 1 output row for each group of input rows.
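    The GROUP BY plus analytic COUNT idea can be sketched as runnable code (Python's sqlite3, with the poster's inserts). The aggregation and the analytic are split into two CTEs here to make the evaluation order explicit: first the GROUP BY collapses the rows, then COUNT(*) OVER counts the resulting groups per part_type_id.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test3 (part_type_id TEXT, group_id INT, part_desc_id INT, part_cmt TEXT);
INSERT INTO test3 VALUES
 ('ABC123',1,10,'comment1'),('ABC123',1,10,'comment2'),
 ('ABC123',2,15,'comment1'),('ABC123',2,15,'comment2'),
 ('EFG123',25,75,'comment3'),('EFG123',25,75,'comment4'),
 ('EFG123',25,75,'comment5'),
 ('XYZ123',1,10,'comment6'),('XYZ123',2,15,'comment7');
""")
rows = conn.execute("""
WITH g AS (
  SELECT part_type_id, group_id, part_desc_id, MIN(part_cmt) AS min_part_cmt
  FROM test3
  GROUP BY part_type_id, group_id, part_desc_id
), got_d_count AS (
  SELECT g.*, COUNT(*) OVER (PARTITION BY part_type_id) AS d_count
  FROM g
)
SELECT part_type_id, group_id, part_desc_id, min_part_cmt
FROM got_d_count
WHERE d_count > 1
ORDER BY part_type_id, group_id
""").fetchall()
# EFG123 has only one group_id/part_desc_id combination, so it is excluded.
print(rows)
```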

  • Truncate output of analytical function?

    For example this query:

    SELECT month, SUM (tot_sales) monthly_sales,

    AVG (SUM (tot_sales)) OVER (ORDER BY month

    ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) rolling_avg

    FROM orders

    WHERE year = 2001 AND region_id = 6

    GROUP BY month;

    gives me an output which includes several decimal places for the rolling_avg column.

    Is there a way to truncate this? I tried using ROUND outside the analytic function and, sure enough, it didn't work. I can't think of another way.

    You can use an outer select on the result of this query:

    select trunc(rolling_avg) from
    ( rolling_avg query);
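    A runnable sketch of a rolling average with truncation, using Python's sqlite3 (SQLite has no TRUNC, so CAST(... AS INTEGER) stands in for it; the orders data is invented, and the monthly aggregation is done in a CTE first):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (month INTEGER, tot_sales REAL);
INSERT INTO orders VALUES (1,100),(1,50),(2,200),(3,70);
""")
rows = conn.execute("""
WITH m AS (
  SELECT month, SUM(tot_sales) AS monthly_sales
  FROM orders
  GROUP BY month
)
SELECT month,
       CAST(AVG(monthly_sales) OVER (ORDER BY month
            ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS INTEGER) AS rolling_avg
FROM m
ORDER BY month
""").fetchall()
# Monthly sales 150, 200, 70 -> rolling averages 175, 140, 135 (truncated).
print(rows)
```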
    
