Confusion with analytic functions

I have created an example of where I am right now with the help of analytic functions. However, I need the query below to return an additional column: the result of factored_day_sales * max(sdus). Any ideas?

The row in question must have the following results:

777777, 5791, 10, 1.5, 15, 90, 135, 7050

The 7050 is the value I don't know how to compute (somehow multiplying factored_day_sales by max(sdus): 15 * 470 = 7050).
create table david_sales (
pro_id number(38),
salesidx number (38,6),
tim_id number(38));

truncate table david_sales;

create table david_compensations (
pro_id number(38),
tim_id number(38),
factor number(38,6));


insert into david_sales values
(777777, 10.00, 5795);
insert into david_sales values
(777777,20.00, 5795);
insert into david_sales values
(777777, 30.00, 5794);
insert into david_sales values
(777777, 40.00, 5794);
insert into david_sales values
(777777, 100.00, 5793);
insert into david_sales values
(777777, 10.00, 5793);
insert into david_sales values
(777777,80.00, 5791);
insert into david_sales values
(777777, 10.00, 5791);

insert into david_compensations values
(777777, 5795, 1.5);
insert into david_compensations values
(777777, 5793, 2.0);
insert into david_compensations values
(777777, 5792, 1.0);
insert into david_compensations values
(777777, 5791, 1.5);



    SELECT  s.pro_id sales_pro
    ,       c.pro_id comp_pro
    ,       s.tim_id sales_tim
    ,       c.tim_id comp_tim
    ,       s.salesidx day_sales
    ,       NVL(c.factor, 1) factor
    ,       s.salesidx * NVL(c.factor, 1) factored_day_sales
    ,       sum(s.salesidx                   ) over (partition by s.pro_id order by s.pro_id, s.tim_id) Sdus
    ,       sum(s.salesidx * NVL(c.factor, 1)) over (partition by s.pro_id order by s.pro_id, s.tim_id) sumMjCj 
      FROM david_sales s
      ,    david_compensations c
      WHERE s.pro_id    = c.pro_id(+)
      AND s.tim_id      = c.tim_id(+)
      AND s.tim_id     BETWEEN 5791  AND 5795
Thanks for looking

Is this what you want?

    SELECT  s.pro_id sales_pro
    ,       c.pro_id comp_pro
    ,       s.tim_id sales_tim
    ,       c.tim_id comp_tim
    ,       s.salesidx day_sales
    ,       NVL(c.factor, 1) factor
    ,       s.salesidx * NVL(c.factor, 1) factored_day_sales
    ,       sum(s.salesidx                   ) over (partition by s.pro_id order by s.pro_id, s.tim_id) Sdus
    ,       sum(s.salesidx * NVL(c.factor, 1)) over (partition by s.pro_id order by s.pro_id, s.tim_id) sumMjCj
    , (s.salesidx * NVL(c.factor, 1) * sum(s.salesidx                   ) over (partition by s.pro_id order by s.pro_id, s.tim_id)) summedMulti
      FROM david_sales s
      ,    david_compensations c
      WHERE s.pro_id    = c.pro_id(+)
      AND s.tim_id      = c.tim_id(+)
      AND s.tim_id     BETWEEN 5791  AND 5795

SALES_PRO              COMP_PRO               SALES_TIM              COMP_TIM               DAY_SALES              FACTOR                 FACTORED_DAY_SALES     SDUS                   SUMMJCJ                SUMMEDMULTI
---------------------- ---------------------- ---------------------- ---------------------- ---------------------- ---------------------- ---------------------- ---------------------- ---------------------- ----------------------
777777                 777777                 5791                   5791                   80                     1.5                    120                    90                     135                    10800
777777                 777777                 5791                   5791                   10                     1.5                    15                     90                     135                    1350  

I get the 1350

or did you mean:

    SELECT  s.pro_id sales_pro
    ,       c.pro_id comp_pro
    ,       s.tim_id sales_tim
    ,       c.tim_id comp_tim
    ,       s.salesidx day_sales
    ,       NVL(c.factor, 1) factor
    ,       s.salesidx * NVL(c.factor, 1) factored_day_sales
    ,       sum(s.salesidx                   ) over (partition by s.pro_id order by s.pro_id, s.tim_id) Sdus
    ,       sum(s.salesidx * NVL(c.factor, 1)) over (partition by s.pro_id order by s.pro_id, s.tim_id) sumMjCj
    ,  s.salesidx * NVL(c.factor, 1) * (sum(s.salesidx * NVL(c.factor, 1)) over (partition by s.pro_id order by s.pro_id, s.tim_id)) summedMulti
      FROM david_sales s
      ,    david_compensations c
      WHERE s.pro_id    = c.pro_id(+)
      AND s.tim_id      = c.tim_id(+)
      AND s.tim_id     BETWEEN 5791  AND 5795 

SALES_PRO              COMP_PRO               SALES_TIM              COMP_TIM               DAY_SALES              FACTOR                 FACTORED_DAY_SALES     SDUS                   SUMMJCJ                SUMMEDMULTI
777777                 777777                 5795                   5795                   10                     1.5                    15                     300                    470                    7050

Note: in the second block I changed it to use sumMjCj instead of Sdus, which seems to match what you wanted (15 * 470 = 7050), while with Sdus it would be 15 * 300 = 4500.
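For what it's worth, the arithmetic of the second query can be checked with a self-contained Python/SQLite sketch (assumptions of the sketch: COALESCE stands in for NVL, an explicit LEFT JOIN for the Oracle (+) outer-join syntax; the default cumulative window frame is the same in SQLite as in Oracle, so tim_id ties share the same running total):

```python
import sqlite3

# Sketch of the second query above, transcribed to SQLite.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE david_sales (pro_id INT, salesidx REAL, tim_id INT);
CREATE TABLE david_compensations (pro_id INT, tim_id INT, factor REAL);
INSERT INTO david_sales VALUES
 (777777,10,5795),(777777,20,5795),(777777,30,5794),(777777,40,5794),
 (777777,100,5793),(777777,10,5793),(777777,80,5791),(777777,10,5791);
INSERT INTO david_compensations VALUES
 (777777,5795,1.5),(777777,5793,2.0),(777777,5792,1.0),(777777,5791,1.5);
""")
rows = con.execute("""
SELECT s.tim_id,
       s.salesidx * COALESCE(c.factor, 1)            AS factored_day_sales,
       SUM(s.salesidx)                        OVER w AS sdus,
       SUM(s.salesidx * COALESCE(c.factor, 1)) OVER w AS summjcj,
       s.salesidx * COALESCE(c.factor, 1)
         * SUM(s.salesidx * COALESCE(c.factor, 1)) OVER w AS summed_multi
FROM david_sales s
LEFT JOIN david_compensations c
       ON s.pro_id = c.pro_id AND s.tim_id = c.tim_id
WHERE s.tim_id BETWEEN 5791 AND 5795
WINDOW w AS (PARTITION BY s.pro_id ORDER BY s.tim_id)
ORDER BY s.tim_id
""").fetchall()
for r in rows:
    print(r)
```

On the tim_id 5795 row with day sales 10, this prints sdus = 300, sumMjCj = 470 and summed_multi = 7050, matching the desired result.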

Published by: tanging on December 11, 2009 06:17

Tags: Database

Similar Questions

  • Need help with the analytic function

    I want to get the details of the highest-paid employee and the 2nd-highest-paid employee for a particular department. But also, the department should have more than 1 employee.
    I tried the query below and it gave me the correct results. But I wonder if there is a solution that avoids the extra subquery.

    Here is the table and the query result:
    with t as
    (
    select 1 emp_id,3 mgr_id,'Rajesh' emp_name,3999 salary,677 bonus,'HR' dpt_nme from dual union
    select 2 ,3 ,'Gangz',4500,800,'Finance' from dual  union
    select 3 ,4 ,'Sid',8000,12000,'IT' from dual  union
    select 4 ,null,'Ram',5000,677,'HR' from dual  union
    select 5 ,4,'Shyam',6000,677,'IT' from dual union
    select 6 ,4 ,'Ravi',9000,12000,'IT' from dual   
    )
    select * from 
    (select emp_id, mgr_id, emp_name, dpt_nme, salary, row_number() over (partition by dpt_nme order by salary desc) rn from t where dpt_nme in 
    (select dpt_nme from t group by dpt_nme having count(*) > 1)) where rn < 3

    Hello

    You need a subquery, but you don't need more than that.
    Here's a way to eliminate the additional subquery:

    WITH     got_analytics     AS
    (
         SELECT  emp_id,     mgr_id,     emp_name, dpt_nme, salary
         ,     ROW_NUMBER () OVER ( PARTITION BY  dpt_nme
                                   ORDER BY          salary     DESC
                           )         AS rn
         ,     COUNT (*)     OVER ( PARTITION BY  dpt_nme
                                       )         AS dpt_cnt
         FROM     t
    )
    SELECT  emp_id,     mgr_id,     emp_name, dpt_nme, salary
    ,     rn
    FROM     got_analytics
    WHERE     rn     < 3
    AND     dpt_cnt     > 1
    ;
    

    Analytic functions are computed after the WHERE clause is applied. Since we need to use the results of the analytic ROW_NUMBER function in a WHERE clause, we have to compute ROW_NUMBER in a subquery, and then use its results in the WHERE clause of the main query. We can call the analytic COUNT function in the same subquery and use its results in the same WHERE clause of the main query.

    What results would you want if there is a tie for the 2nd highest salary in some department? For example, what if you add this row to your sample data:

    select 7 ,3 ,'Sunil',8000,12000,'IT' from dual  union
    

    If so, you may want to use RANK rather than ROW_NUMBER.
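As a quick cross-check (not part of the original thread), the single-subquery pattern above runs essentially unchanged on SQLite window functions; this is just an illustrative sketch of the sample data:

```python
import sqlite3

# ROW_NUMBER and COUNT(*) OVER computed side by side in one subquery,
# then both filtered in the outer WHERE clause.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (emp_id INT, mgr_id INT, emp_name TEXT,"
            " salary INT, bonus INT, dpt_nme TEXT)")
con.executemany("INSERT INTO t VALUES (?,?,?,?,?,?)", [
    (1, 3, 'Rajesh', 3999, 677, 'HR'),
    (2, 3, 'Gangz', 4500, 800, 'Finance'),
    (3, 4, 'Sid', 8000, 12000, 'IT'),
    (4, None, 'Ram', 5000, 677, 'HR'),
    (5, 4, 'Shyam', 6000, 677, 'IT'),
    (6, 4, 'Ravi', 9000, 12000, 'IT'),
])
rows = con.execute("""
WITH got_analytics AS (
    SELECT emp_id, emp_name, dpt_nme, salary,
           ROW_NUMBER() OVER (PARTITION BY dpt_nme ORDER BY salary DESC) AS rn,
           COUNT(*)     OVER (PARTITION BY dpt_nme)                      AS dpt_cnt
    FROM t
)
SELECT emp_name, dpt_nme, salary, rn
FROM got_analytics
WHERE rn < 3 AND dpt_cnt > 1
ORDER BY dpt_nme, rn
""").fetchall()
for r in rows:
    print(r)
```

Finance drops out (only one employee); HR and IT each return their top two salaries.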

  • SQL question, perhaps with the analytical functions?

    I have a small problem:

    I have a table with:

    DAY_ID, PAGE_ORDER, SID, TIME, CONTENT.

    I want to keep only the first row (the one with the minimum page_order) of each run of consecutive rows with the same content.

    The data are:

    DAY       PAGE_ORDER  SID    TIMES               CONTENT
    20150825  1           4711   25.08.15 06:38:43   /body/home
    20150825  2           4711   25.08.15 06:39:10   home/aufmacher/Home/42303938
    20150825  3           4711   25.08.15 06:39:15   home/aufmacher/Home/42303938
    20150825  4           4711   25.08.15 06:39:20   home/aufmacher/Home/42303938
    20150825  5           4711   25.08.15 06:39:24   home/aufmacher/Home/42303938
    20150825  6           4711   25.08.15 06:39:32   home/aufmacher/Home/42303938
    20150825  7           4711   25.08.15 06:39:39   home/aufmacher/Home/42303938
    20150825  8           4711   25.08.15 06:39:46   home/aufmacher/Home/42303938
    20150825  9           4711   25.08.15 06:39:49   home/aufmacher/Home/42303938
    20150825  10          4711   25.08.15 06:39:51   home/aufmacher/Home/42303938
    20150825  11          4711   25.08.15 06:41:17   pol/art/2015/08/24/paris
    20150825  12          4711   25.08.15 06:42:36   /body/home
    20150825  13          4711   25.08.15 07:06:09   /body/home
    20150825  14          4711   25.08.15 07:06:36   reg/article/memo

    I want as a result:

    20150825  1           4711   25.08.15 06:38:43   /body/home
    20150825  2           4711   25.08.15 06:39:10   home/aufmacher/Home/42303938
    20150825  11          4711   25.08.15 06:41:17   pol/art/2015/08/24/paris
    20150825  12          4711   25.08.15 06:42:36   /body/home
    20150825  14          4711   25.08.15 07:06:36   reg/article/memo

    Who knows a good way?

    Thank you very much

    It sounds like a simple GROUP BY solution. You group by content and maybe a few other columns, such as day and sid. Then you want to show some value from inside this group; several different aggregate functions can do this.

    Not tested, for lack of table create and insert scripts:

    select day, sid, content
            ,min(page_order) as page_order
            ,min(times) as times -- if the first page_order also has the first time
            ,min(times) keep dense_rank first (order by page_order) as times2 -- this is needed in case the first page_order is at a later time
    from yourTable
    group by day, sid, content
    

    If Solomon is right and the same content can appear in several separate runs (the example data show that it can), then we can use the Tabibitosan method to create the groups.

    with step1 as (select t1.*, row_number() over (partition by day, sid, content order by page_order) rn
                         from yourTable
                         )
    select  day, sid, content
             , page_order - rn as group_number
             , min(page_order) as page_order
             , min(times) as times -- if the first page_order also has the first time
             , min(times) keep dense_rank first (order by page_order) as times2 -- this is needed in case the first page_order is at a later time
    from step1
    group by day, sid, content, page_order - rn
    order by day, sid, content, group_number;
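A runnable Python/SQLite sketch of the Tabibitosan query above, on a trimmed-down, cleaned-up subset of the sample data (an assumption of the sketch): rows in the same consecutive run of identical content share the same page_order - rn value, so grouping on it collapses each run to its first row:

```python
import sqlite3

# Tabibitosan: row_number within (day, sid, content) ordered by page_order;
# consecutive rows of the same content keep a constant page_order - rn.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE clicks (day INT, page_order INT, sid INT,"
            " times TEXT, content TEXT)")
con.executemany("INSERT INTO clicks VALUES (?,?,?,?,?)", [
    (20150825, 1, 4711, '06:38:43', '/body/home'),
    (20150825, 2, 4711, '06:39:10', 'home/aufmacher/Home/42303938'),
    (20150825, 3, 4711, '06:39:15', 'home/aufmacher/Home/42303938'),
    (20150825, 4, 4711, '06:39:20', 'home/aufmacher/Home/42303938'),
    (20150825, 11, 4711, '06:41:17', 'pol/art/2015/08/24/paris'),
    (20150825, 12, 4711, '06:42:36', '/body/home'),
])
rows = con.execute("""
WITH step1 AS (
    SELECT c.*,
           ROW_NUMBER() OVER (PARTITION BY day, sid, content
                              ORDER BY page_order) AS rn
    FROM clicks c
)
SELECT day, sid, content,
       MIN(page_order) AS page_order,
       MIN(times)      AS times
FROM step1
GROUP BY day, sid, content, page_order - rn
ORDER BY MIN(page_order)
""").fetchall()
for r in rows:
    print(r)
```

The two /body/home visits (page_order 1 and 12) stay separate because they belong to different runs.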
    
  • Confusion with the Log function

    Hi all

    I am facing a problem using the logarithmic function on oracle XE 11 g.

    We all know that log base 2 of 8 is 3. Now obviously 3 should be a whole number here, not a fraction (or am I wrong?).

    However, I do this:

    SELECT LOG(2,8) FROM DUAL;
    
         -- Result is 3
    
       -- but 
         SELECT floor(LOG(2,8)) FROM DUAL;  
    
        -- Result is 2 !!!!!!!!
    
      --   However 
         SELECT FLOOR(3) FROM DUAL;
    
         -- Result 3 as expected
    

    DB Version is 11g XE.

    Thank you,

    Ru

    SQL> SELECT TO_CHAR(LOG(2,8)) FROM DUAL;

    TO_CHAR(LOG(2,8))
    ----------------------------------------
    2.99999999999999999999999999999999999999

    SQL>
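The same trap is easy to reproduce with binary floating point in any language; a minimal Python sketch (0.29 * 100 is a classic example of a value that is "obviously" 29 but is stored just below it):

```python
import math

# A value can display (or "obviously" be) a whole number while actually
# being stored just below it; floor() then truncates toward negative
# infinity, dropping you to the next integer down.
x = 0.29 * 100
print(x)                        # 28.999999999999996, not 29.0
print(math.floor(x))            # 28

# The usual fix: round to a sensible number of digits before flooring,
# e.g. FLOOR(ROUND(LOG(2,8), 10)) in the SQL above.
print(math.floor(round(x, 9)))  # 29
```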

  • Need help with an analytic function to select the maximum and minimum of a result column

    Hey there OTN.

    I have an interesting problem that I was hoping you would be able to help me with. I have a requirement to conditionally select the max and min of a column in BI Publisher, and since my report works from an OBIEE analysis, I need to store the MAX and MIN of the column values in separate columns to compare against. See the example below. You will notice that there are 4 stores, each with today's sales. I need OBIEE to go through all the results of the sales column and then pick the max of the whole dataset. I can't use MAX here because it will take the MAX of the row, which will return only that row's sales. Instead, it must analyze all sales results and fill the appropriate column. Any idea on how to do this in OBIEE/Publisher? Or is it not possible?

    Day       Store    Sales  MAX Sales  MIN Sales
    05/11/15  Store 1   5000       8000       1000
    05/11/15  Store 2   7500       8000       1000
    05/11/15  Store 3   1000       8000       1000
    05/11/15  Store 4   8000       8000       1000

    I'm waiting for your answers. Thanks in advance!

    PS: I will always mark messages that are useful and eventually mark it as correct answer if we come to a resolution!

    See you soon.

    Can't you do the same thing with RANK("Sales")?

    RANK("Sales") = 1 : the max value in the sales results

    RANK(-1 * "Sales") = 1 : the min value in the sales results

    I guess you can, and you can then format the cells based on these values, where a value of 1 is the max or the min according to which RANK formula you used...

  • Please help improve the query with the analytic function

    The query mentioned below takes about 10 hours to complete (Oracle 10.2.0.4).

    There are 3 tables (table t has a 1:n relationship with table e, and table k also has a 1:n relationship with table e).
    Table t contains 200,000 rows (this table is truncated and re-inserted several times a week).
    Table e contains 1 million rows.
    Table k contains 170 million rows.

    drop table t;
    create table t
    (
       t_id number,
       constraint t_pk primary key (t_id)
    );
    
    drop table e;
    create table e
    (
       e_id number,
       e_doc nvarchar2(16),
       e_date date,
       constraint e_pk primary key (e_id)
    );
    
    drop table k;
    create table k (
       t_id number,
       e_id number
    );
    
    create unique index k_i1 on k(t_id, e_id);
    
    exec dbms_stats.gather_table_stats(user, 'T');
    exec dbms_stats.gather_table_stats(user, 'K');
    exec dbms_stats.gather_table_stats(user, 'E');
    
    
    
    -- Sample data:
    
    insert into t(t_id) values (100);
    insert into t(t_id) values (101);
    insert into t(t_id) values (102);
    insert into t(t_id) values (103);
    
    
    insert into e(e_id, e_doc, e_date) values (200, 'doc 200', to_date('01.01.2010', 'DD.MM.YYYY'));
    insert into e(e_id, e_doc, e_date) values (201, 'doc 201', to_date('02.01.2010', 'DD.MM.YYYY'));
    insert into e(e_id, e_doc, e_date) values (202, 'doc 202', to_date('03.01.2010', 'DD.MM.YYYY'));
    insert into e(e_id, e_doc, e_date) values (203, 'doc 203', to_date('04.01.2010', 'DD.MM.YYYY'));
    insert into e(e_id, e_doc, e_date) values (204, 'doc 204', to_date('05.01.2010', 'DD.MM.YYYY'));
    insert into e(e_id, e_doc, e_date) values (205, 'doc 205', to_date('06.01.2010', 'DD.MM.YYYY'));
    insert into e(e_id, e_doc, e_date) values (206, 'doc 206', to_date('07.01.2010', 'DD.MM.YYYY'));
    insert into e(e_id, e_doc, e_date) values (207, 'doc 207', to_date('08.01.2010', 'DD.MM.YYYY'));
    
    insert into k(t_id, e_id) values (100, 200);
    insert into k(t_id, e_id) values (100, 201);
    insert into k(t_id, e_id) values (100, 202);
    insert into k(t_id, e_id) values (100, 203);
    
    insert into k(t_id, e_id) values (101, 203);
    insert into k(t_id, e_id) values (101, 204);
    
    
    
    
    
    select k.t_id, e.e_date,  e.e_id, e.e_doc
    from   e, k, t
    where  k.e_id = e.e_id
    and    k.t_id = t.t_id
    order by k.t_id, e.e_date desc;
    
    
          T_ID E_DATE         E_ID E_DOC
    ---------- -------- ---------- ----------------
           100 04.01.10        203 doc 203
           100 03.01.10        202 doc 202
           100 02.01.10        201 doc 201
           100 01.01.10        200 doc 200
           101 05.01.10        204 doc 204
           101 04.01.10        203 doc 203
     I need a query that returns the latest 3 entries for a given t_id:
          T_ID E_DOC_LIST
    ---------- -----------------------
           100 doc 200/doc 201/doc 202
           101 doc 203/doc 204
    
    
    Sample query:
    
    select t_id, e_doc_list
       from (
       select  k.t_id,
            row_number() over(partition by k.t_id order by k.t_id, e.e_date desc) r_num, 
            rtrim(       lag(e.e_doc, 0) over(partition by k.t_id order by k.t_id, e.e_date) || 
                  '/' || lag(e.e_doc, 1) over(partition by k.t_id order by k.t_id, e.e_date) || 
                  '/' || lag(e.e_doc, 2) over(partition by k.t_id order by k.t_id, e.e_date), 
                  '/') e_doc_list
         from  e,
               k,
               t
         where  k.e_id = e.e_id
         and    k.t_id = t.t_id
         order by k.t_id, e.e_date desc
    ) where  r_num = 1   ;
    
    
          T_ID E_DOC_LIST
    ---------- --------------------------------------------------
           100 doc 203/doc 202/doc 201
           101 doc 204/doc 203
     The example query takes several hours in production.
     The r_num = 1 filter is applied quite late. Is there another way to write the query, or even to restructure the tables?
    For the sample query:
    
    -----------------------------------------------------------------------------------------
    | Id  | Operation                        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    -----------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                 |      |     6 |   468 |     6  (50)| 00:00:01 |
    |*  1 |  VIEW                            |      |     6 |   468 |     6  (50)| 00:00:01 |
    |*  2 |   WINDOW SORT PUSHED RANK        |      |     6 |   216 |     6  (50)| 00:00:01 |
    |   3 |    WINDOW SORT                   |      |     6 |   216 |     6  (50)| 00:00:01 |
    |   4 |     NESTED LOOPS                 |      |     6 |   216 |     4  (25)| 00:00:01 |
    |   5 |      MERGE JOIN                  |      |     6 |   198 |     4  (25)| 00:00:01 |
    |   6 |       TABLE ACCESS BY INDEX ROWID| E    |     8 |   208 |     2   (0)| 00:00:01 |
    |   7 |        INDEX FULL SCAN           | E_PK |     8 |       |     1   (0)| 00:00:01 |
    |*  8 |       SORT JOIN                  |      |     6 |    42 |     2  (50)| 00:00:01 |
    |   9 |        INDEX FULL SCAN           | K_I1 |     6 |    42 |     1   (0)| 00:00:01 |
    |* 10 |      INDEX UNIQUE SCAN           | T_PK |     1 |     3 |     0   (0)| 00:00:01 |
    -----------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       1 - filter("R_NUM"=1)
       2 - filter(ROW_NUMBER() OVER ( PARTITION BY "K"."T_ID" ORDER BY
                  "K"."T_ID",INTERNAL_FUNCTION("E"."E_DATE") DESC )<=1)
       8 - access("K"."E_ID"="E"."E_ID")
           filter("K"."E_ID"="E"."E_ID")
      10 - access("K"."T_ID"="T"."T_ID")
    
    
    and for query in production
    
    ---------------------------------------------------------------------------------------
    | Id  | Operation                 | Name         | Rows  | Bytes |TempSpc| Cost (%CPU)|
    ---------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT          |              |  3118K|   425M|       |   160K  (1)|
    |   1 |  VIEW                     |              |  3118K|   425M|       |   160K  (1)|
    |   2 |   SORT ORDER BY           |              |  3118K|   163M|   383M|   160K  (1)|
    |   3 |    WINDOW SORT PUSHED RANK|              |  3118K|   163M|   383M|   160K  (1)|
    |   4 |     WINDOW SORT           |              |  3118K|   163M|   383M|   160K  (1)|
    |   5 |      HASH JOIN            |              |  3118K|   163M|    40M| 33991   (1)|
    |   6 |       TABLE ACCESS FULL   | E            |  1053K|    28M|       |  4244   (1)|
    |   7 |       NESTED LOOPS        |              |  3118K|    80M|       | 21918   (1)|
    |   8 |        TABLE ACCESS FULL  | T            |   144K|  1829K|       |   282   (2)|
    |   9 |        INDEX RANGE SCAN   | K_I1         |    22 |   308 |       |     1   (0)|
    ---------------------------------------------------------------------------------------
    
     

    TimWong765 wrote:
    ...
    Table t contains 200,000 rows. (* this table is truncated and re-inserted several times a week *)

    You could be in one of the rare cases where the index should be rebuilt; take a look at the following thread:
    http://asktom.Oracle.com/pls/asktom/f?p=100:11:0:P11_QUESTION_ID:6601312252730 #69571308712887 (search for 'sweeper index')
    Make sure that you have checked that you really are in this case before going for an expensive index rebuild.

    Nicolas.
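Not part of the original answer, but for illustration: the "latest 3 per t_id" requirement can also be sketched with ROW_NUMBER plus string aggregation, filtering r_num <= 3 instead of chaining LAG calls (GROUP_CONCAT in SQLite below; LISTAGG would be the Oracle 11.2+ analogue). The sample data is a small subset of the thread's:

```python
import sqlite3

# Rank documents per t_id by date descending, keep the top 3,
# then aggregate them into one '/'-separated list per t_id.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE e (e_id INT PRIMARY KEY, e_doc TEXT, e_date TEXT);
CREATE TABLE k (t_id INT, e_id INT);
INSERT INTO e VALUES
 (200,'doc 200','2010-01-01'),(201,'doc 201','2010-01-02'),
 (202,'doc 202','2010-01-03'),(203,'doc 203','2010-01-04'),
 (204,'doc 204','2010-01-05');
INSERT INTO k VALUES (100,200),(100,201),(100,202),(100,203),
 (101,203),(101,204);
""")
rows = con.execute("""
WITH ranked AS (
    SELECT k.t_id, e.e_doc,
           ROW_NUMBER() OVER (PARTITION BY k.t_id
                              ORDER BY e.e_date DESC) AS r_num
    FROM k JOIN e ON k.e_id = e.e_id
)
SELECT t_id, GROUP_CONCAT(e_doc, '/') AS e_doc_list
FROM ranked
WHERE r_num <= 3
GROUP BY t_id
ORDER BY t_id
""").fetchall()
for r in rows:
    print(r)
```

Note that GROUP_CONCAT makes no ordering promise inside the list, so the test below only checks the set of documents per t_id.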

  • Facing the question with the analytic function

    Hello

    I partition by id and rank the data by date.
    But the data do not come out ordered by date.


    Data:
    TID   Fid   TDate
    
    11     100  19/01/2009
    11     102  12/01/2009
    11     103  13/03/2009
    18     556  01/02/2009      
    16     400  05/03/2009
    16     401  06/04/2009
    17     300  05/02/2009
    
    count(*) over (partition by tid order by tdate)
    Please suggest.

    Published by: user545846 on June 4, 2009 10:26

    Thank you for your example!

    And:

    :D I did not... I had checked it after the comment from Alex.

    Well, I'm glad that my first thoughts turned out to be false.

    How about this? I made the assumption that tfid 15 should come after tfid 10 (they both have the same max tdate), because it has a lower count:

    MHO%xe> select tfid
      2  ,      src
      3  ,      tdate
      4  ,      bid
      5  ,      sid
      6  ,      fid
      7  from  ( select t.*
      8          ,      count(*) over (partition by t.tfid ) cnt
      9          ,      max(tdate) over (partition by t.tfid ) dt
     10          from   trading t )
     11  order by dt desc
     12  ,        cnt desc
     13  ,        tdate desc;
    
          TFID SRC        TDATE                      BID        SID        FID
    ---------- ---------- ------------------- ---------- ---------- ----------
            13 KP         22-02-2009 00:00:00       5468       7865        111
            13 MS         18-02-2009 00:00:00       4669       6893        110
            10 KP         20-02-2009 00:00:00       1258       6985        106
            10 KP         10-02-2009 00:00:00       1548       9675        100
            10 KP         02-02-2009 00:00:00       5468       7895        101
            15 KP         20-02-2009 00:00:00       1548       6975        118
    
    6 rows selected.
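The trick of ordering whole groups by a group-level aggregate can be sketched as follows (a made-up subset of the trading data; column names follow the thread):

```python
import sqlite3

# Attach each row's group aggregates (count and max date per tfid) with
# analytic functions, then order by those so whole groups sort together.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trading (tfid INT, src TEXT, tdate TEXT, fid INT)")
con.executemany("INSERT INTO trading VALUES (?,?,?,?)", [
    (13, 'KP', '2009-02-22', 111),
    (13, 'MS', '2009-02-18', 110),
    (10, 'KP', '2009-02-20', 106),
    (10, 'KP', '2009-02-10', 100),
    (10, 'KP', '2009-02-02', 101),
    (15, 'KP', '2009-02-20', 118),
])
rows = con.execute("""
SELECT tfid, src, tdate, fid
FROM (SELECT t.*,
             COUNT(*)   OVER (PARTITION BY tfid) AS cnt,
             MAX(tdate) OVER (PARTITION BY tfid) AS dt
      FROM trading t)
ORDER BY dt DESC, cnt DESC, tdate DESC
""").fetchall()
for r in rows:
    print(r)
```

tfid 13 comes first (latest max date), then tfid 10 before tfid 15 (same max date, bigger group), matching the ordering shown above.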
    
  • A question about the analytical function used with the GROUP BY clause in SHORT

    Hi all

    I created the following table named myenterprise
    CITY       STOREID    MONTH_NAME TOTAL_SALES            
    ---------- ---------- ---------- ---------------------- 
    paris      id1        January    1000                   
    paris      id1        March      7000                   
    paris      id1        April      2000                   
    paris      id2        November   2000                   
    paris      id3        January    5000                   
    london     id4        Janaury    3000                   
    london     id4        August     6000                   
    london     id5        September  500                    
    london     id5        November   1000
    If I want to find the total sales per city, I run the following query:
    SELECT city, SUM(total_sales) AS TOTAL_SALES_PER_CITY
    FROM myenterprise
    GROUP BY city
    ORDER BY city, TOTAL_SALES_PER_CITY;
    that works very well and produces the expected result, i.e.
    CITY       TOTAL_SALES_PER_CITY   
    ---------- ---------------------- 
    london     10500                  
    paris      17000            
    Now, in one of my SQL books (Mastering Oracle SQL) I found another method using SUM, but this time as an analytic function. Here's what the book suggests as an alternative:
    SELECT city, 
           SUM(SUM(total_sales)) OVER (PARTITION BY city) AS TOTAL_SALES_PER_CITY
    FROM myenterprise
    GROUP BY city
    ORDER BY city, TOTAL_SALES_PER_CITY;
    I know that analytic functions are evaluated after the GROUP BY clause has been processed completely, and that, unlike regular aggregate functions, they return their result for each row belonging to the partitions specified in the PARTITION BY clause (if a partition clause is defined).

    Now my problem is that I do not understand why we have to use two SUM functions. If we use only one, i.e.
    SELECT city, 
           SUM(total_sales) OVER (PARTITION BY city) AS TOTAL_SALES_PER_CITY
    FROM myenterprise
    GROUP BY city
    ORDER BY city, TOTAL_SALES_PER_CITY;
    This generates the following error:
    Error starting at line 2 in command:
    SELECT city, 
           SUM(total_sales) OVER (PARTITION BY city) AS TOTAL_SALES_PER_CITY
    FROM myenterprise
    GROUP BY city
    ORDER BY city, TOTAL_SALES_PER_CITY
    Error at Command Line:2 Column:11
    Error report:
    SQL Error: ORA-00979: not a GROUP BY expression
    00979. 00000 -  "not a GROUP BY expression"
    *Cause:    
    *Action:
    The error is reported for line 2, column 11, which is the expression SUM(total_sales). Well, it's true that total_sales does not appear in the GROUP BY clause, but I thought this should not be a problem: it is used in an analytic function, so it is evaluated after the GROUP BY clause.

    So here's my question:

    Why use SUM (SUM (total_sales)) instead of SUM (total_sales)?


    Thanks in advance!
    :)





    In case you are interested, here is my definition of the table:
    DROP TABLE myenterprise;
    CREATE TABLE myenterprise(
    city VARCHAR2(10), 
    storeid VARCHAR2(10),
    month_name VARCHAR2(10),
    total_sales NUMBER);
    
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('paris', 'id1', 'January', 1000);
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('paris', 'id1', 'March', 7000);
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('paris', 'id1', 'April', 2000);
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('paris', 'id2', 'November', 2000);
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('paris', 'id3', 'January', 5000);
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('london', 'id4', 'Janaury', 3000);
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('london', 'id4', 'August', 6000);
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('london', 'id5', 'September', 500);
    INSERT INTO myenterprise(city, storeid, month_name, total_sales)
      VALUES ('london', 'id5', 'November', 1000);
    Edited by: dariyoosh on April 9, 2009 04:51

    It is clear that the analytic function is redundant here...
    You can even use AVG or any other analytic function:

    SQL> SELECT city,
      2         avg(SUM(total_sales)) OVER (PARTITION BY city) AS TOTAL_SALES_PER_CITY
      3  FROM myenterprise
      4  GROUP BY city
      5  ORDER BY city, TOTAL_SALES_PER_CITY;
    
    CITY       TOTAL_SALES_PER_CITY
    ---------- --------------------
    london                    10500
    paris                     17000
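To make the evaluation order concrete: the inner SUM is the regular GROUP BY aggregate, and the outer SUM is the analytic one applied to the already-grouped rows. This sketch separates the two steps with a CTE (grouping per store here, so the analytic step has something left to add up); SQLite stands in for Oracle:

```python
import sqlite3

# Step 1 (CTE): the ordinary GROUP BY aggregate, one row per (city, store).
# Step 2 (outer query): the analytic SUM over those grouped rows.
# Oracle's SUM(SUM(total_sales)) OVER (...) fuses these two steps.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE myenterprise (city TEXT, storeid TEXT,"
            " month_name TEXT, total_sales INT)")
con.executemany("INSERT INTO myenterprise VALUES (?,?,?,?)", [
    ('paris','id1','January',1000), ('paris','id1','March',7000),
    ('paris','id1','April',2000), ('paris','id2','November',2000),
    ('paris','id3','January',5000), ('london','id4','Janaury',3000),
    ('london','id4','August',6000), ('london','id5','September',500),
    ('london','id5','November',1000),
])
rows = con.execute("""
WITH per_store AS (
    SELECT city, storeid, SUM(total_sales) AS store_sales
    FROM myenterprise
    GROUP BY city, storeid
)
SELECT city, storeid, store_sales,
       SUM(store_sales) OVER (PARTITION BY city) AS total_sales_per_city
FROM per_store
ORDER BY city, storeid
""").fetchall()
for r in rows:
    print(r)
```

Each store row keeps its own subtotal while carrying the city total alongside it, which is exactly what the analytic outer SUM buys you over a plain GROUP BY city.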
    
  • SQL using the analytic function


    Hi all

    I want a help in the creation of my SQL query to retrieve the data described below:

    I have a test of sample table containing data as below:

    ID  Desc  Status

    1   T1    DISABLED
    2   T2    ACTIVE
    3   T3    SUCCESS
    4   T4    DISABLED

    What I want to do is select all the rows with an ACTIVE status in the table, but if there is no ACTIVE status, the query should give me the last row with DISABLED status.

    Can I do this in a single query, for example by using an analytic function? If yes, can you help me write the query?

    Kind regards

    Raluce

    Something like that?

    I had to fix it.

    with testdata as (
    select 1 id, 'T1' dsc, 'DISABLED' status from dual union all
    select 2 id, 'T2' dsc, 'ACTIVE' status from dual union all
    select 3 id, 'T3' dsc, 'SUCCESS' status from dual union all
    select 4 id, 'T4' dsc, 'DISABLED' status from dual
    )
    select
      id
    , dsc
    , status
    from testdata
    where
    status =
      case when (select count(*) from testdata where status = 'ACTIVE') > 0
           then 'ACTIVE'
           else 'DISABLED'
      end
    and (
      id in (select id from testdata where status = 'ACTIVE')
      or
      id = (select max(id) from testdata where status = 'DISABLED')
    )

    ID  DSC  STATUS

    2   T2   ACTIVE

    Maybe this is more efficient:

    select
      id
    , dsc
    , status
    from testdata
    where
    status =
      case when (select count(*) from testdata where status = 'ACTIVE') > 0
           then 'ACTIVE'
           else 'DISABLED'
      end
    and
    id = (
      case when (select count(*) from testdata where status = 'ACTIVE') > 0
           then id
           else (select max(id) from testdata where status = 'DISABLED')
      end
    )

    Post edited by: chris227 (correction)

    Post edited by: chris227 (extended)
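A sketch of the same requirement in a single pass with analytic functions, as the question actually asked, instead of repeated scalar subqueries (SQLite syntax; SUM(status = 'ACTIVE') counts the ACTIVE rows over the whole table):

```python
import sqlite3

# One scan: each row carries the table-wide ACTIVE count and the last
# DISABLED id via empty-window analytics; the outer WHERE picks either
# all ACTIVE rows or, failing that, the last DISABLED row.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE testdata (id INT, dsc TEXT, status TEXT)")
con.executemany("INSERT INTO testdata VALUES (?,?,?)", [
    (1, 'T1', 'DISABLED'), (2, 'T2', 'ACTIVE'),
    (3, 'T3', 'SUCCESS'), (4, 'T4', 'DISABLED'),
])
rows = con.execute("""
SELECT id, dsc, status
FROM (SELECT t.*,
             SUM(status = 'ACTIVE') OVER () AS n_active,
             MAX(CASE WHEN status = 'DISABLED' THEN id END)
                 OVER () AS last_disabled
      FROM testdata t)
WHERE (n_active > 0 AND status = 'ACTIVE')
   OR (n_active = 0 AND id = last_disabled)
""").fetchall()
print(rows)
```

With the sample data this returns the single ACTIVE row; delete it and the query falls back to id 4, the last DISABLED row.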

  • How to use Group by in the analytic function

    I need to return the department that has the minimum salary, in one row. It must be done with an analytic function, but I have a problem with GROUP BY: I can't use MIN() without GROUP BY.

    Select * from (select min(sal) min_salary, deptno, RANK() OVER (ORDER BY sal ASC, rownum ASC) rn from emp group by deptno) WHERE rn < 20 order by deptno;

    Published by: senza on 6.11.2009 16:09

    Hello

    senza wrote:
    I need to return the department that has the minimum salary, in one row. It must be done with an analytic function

    So it has to be done with an analytic function? Sounds like it is homework.

    The best way to get these results is with an aggregate function, not an analytic one:

    SELECT      MIN (deptno) KEEP (DENSE_RANK FIRST ORDER BY sal)     AS dept_with_lowest_sal
    FROM      scott.emp
    ;
    

    Note that you do not need a subquery.
    This can be modified if, for example, you want the department with the lowest sal for each job.

    But if your assignment is to use an analytic function, then that's what you have to do.

    but I have a problem with GROUP BY. I can't use MIN() without GROUP BY.

    Of course you can use MIN without GROUP BY: almost all aggregate functions (including MIN) have analytic equivalents.
    However, in this problem you don't need to. The best analytic approach uses only RANK, not MIN. If you ORDER BY sal, the rows with rank = 1 will have the minimum salary.

    Select * from (select min(sal) min_salary, deptno, RANK() OVER (ORDER BY sal ASC, rownum ASC) rn from emp group by deptno) WHERE rn < 20 order by deptno

    Try selecting plain old sal instead of MIN(sal), and get rid of the GROUP BY clause.

    Adding ROWNUM to the ORDER BY clause makes RANK return the same results as ROW_NUMBER: whenever there is a tie on sal, the output will still contain distinct numbers. Which row gets the lower number is quite arbitrary, and not necessarily the same every time you run the query. For example, MARTIN and WARD have exactly the same salary, 1250. The query you posted would assign rn = 4 to one of them and rn = 5 to the other. Who gets 4? It's a toss-up. It could be MARTIN the first time you try, and WARD the next. (In fact, in a very small table like scott.emp, it probably will be consistent, but it is still arbitrary.) If this is what you want, it would be clearer and simpler just to use ROW_NUMBER instead of RANK.
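The tie-breaking point can be seen with a small scott.emp-like sample (a made-up subset): RANK gives both 1250 rows the same number, while ROW_NUMBER forces distinct, arbitrary numbers:

```python
import sqlite3

# RANK vs ROW_NUMBER on tied salaries: RANK repeats 4 for both 1250 rows,
# ROW_NUMBER hands out 4 and 5 in an arbitrary order.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (ename TEXT, deptno INT, sal INT)")
con.executemany("INSERT INTO emp VALUES (?,?,?)", [
    ('SMITH', 20, 800), ('JAMES', 30, 950), ('ADAMS', 20, 1100),
    ('MARTIN', 30, 1250), ('WARD', 30, 1250),
])
rows = con.execute("""
SELECT ename, sal,
       RANK()       OVER (ORDER BY sal) AS rnk,
       ROW_NUMBER() OVER (ORDER BY sal) AS rn
FROM emp
ORDER BY sal, ename
""").fetchall()
for r in rows:
    print(r)
```

Which of MARTIN and WARD gets rn = 4 is up to the engine, which is exactly the arbitrariness described above.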

  • Satellite A100-220: is it possible to extend with the WLan functionality?

    As in the title: is it possible to extend the Satellite A100-220 - which has no built-in WLAN - with WLAN functionality?

    Hello

    It is no problem to add WLAN to a unit without an internal WLAN card. You can use a small USB WLAN stick; it is very compact and easy to configure. Before you buy anything, get advice from your local dealer.

  • How to find the number of data items in a file written with the ArryToFile function?

    I wrote a table of numbers, in 2 groups of columns, to a file using the LabWindows/CVI ArrayToFile function. Now, if I want to read the file back with the FileToArray function, how do I know the number of elements in the file? At the time of writing I know how many array elements I wrote, but suppose I want to read the file at a later time; how do I find the number of elements in the file, so that I can read exactly that many and present them? Thank you all.

    Hello

    I start with the second question:

    bytes_read = ReadLine (file_handle, line_buffer, maximum_bytes);

    the second argument is the buffer that stores the characters read, so it's an array of characters; it must be large enough to hold maximum_bytes characters plus the terminating NUL, i.e. char [maximum_bytes + 1]

    So the number of lines in your text file can be determined in a loop:

    Open the file

    lines = 0;
    while (ReadLine() > 0)
    {
        lines++;
    }

    Close the file
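    The same counting loop, sketched in Python rather than CVI (the file name and contents below are made up for illustration; each successful read stands in for ReadLine() returning a positive byte count):

```python
import os
import tempfile

# Write a small hypothetical 2-column data file, as ArrayToFile might.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "data.txt")
with open(path, "w") as f:
    f.write("1.0\t2.0\n3.0\t4.0\n5.0\t6.0\n")   # 3 rows, 2 columns

# Count the lines before parsing, so the element count is known in advance.
lines = 0
with open(path) as f:
    for _ in f:          # one iteration per line, like ReadLine() > 0
        lines += 1

print(lines)
```

    Knowing the line count up front tells you how large an array to allocate before calling FileToArray.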

  • Extremely confused with the extremely complicated presentation of BlackBerry applications

    Please help me; I'm very confused by the extremely complicated BlackBerry application signing process.

    I made an app named "App A". I got it signed and ran it in the Ripple emulator, signing with the password 'HelloWorld' (not my real password, of course).

    Now I am making another application, "App B". I CAN'T sign it with the same HelloWorld password, because the Ripple emulator says: "Oh, Snap! Build request failed with the message: [ERROR] error: Code signing request failed because this version of the application or the package has been previously signed. Please increment the version(s) and try signing again."

    Now I don't understand, and I am EXTREMELY confused by the BlackBerry application signing model.

    Should I first request another RDK and set of code signing files, and create a new signature password for App B?

    Or should I just increment the bundle version? The problem is that when I tried that, the BlackBerry server gave an error message saying the APP ID is already in use.

    Okay, I'm officially stuck and I don't know what to do. Can any BlackBerry developer here, or an official BlackBerry employee, tell me what I need to do?

    Thank you.

    I don't use Ripple either, but I can see what the problem is (it has nothing to do with Ripple).

    Open the config.xml file for your new project (probably copied from 'App A'?).

    In the widget section, change the id to something different, for example com.example.AppB

    Save, reload Ripple, and make sure you see the new ID under Config -> Widget.

    All sorted

  • Problems with the Row_Number function

    I'm having problems with the ROW_NUMBER function. I use it to assign line numbers to records where a student has a passing grade on a module, excluding the failed modules (I want to show a 0 as the line number for failed modules). The problem is that even when I use a condition, the report still assigns a line number to a failed module even though it does not display it (it shows the 0 I wanted). The results are displayed as follows:

    Line number         Module     Grade
    1                   ModuleA    Pass
    2                   ModuleB    Pass
    0                   ModuleC    Fail
    4 (instead of 3)    ModuleD    Pass

    How can I make it skip assigning a line number to the failed modules? Please help.

    Thank you.

    Thank you very much, Melanie. I changed the query per your suggestion, making it a union of the failed and passed modules (using ROW_NUMBER on the passed modules only). Thanks for the solution.
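    A small sketch of that UNION approach, using Python's sqlite3 as a stand-in for the original database (the results table, its columns, and the grades below are hypothetical; requires SQLite 3.25+ for window functions): number only the passed modules with ROW_NUMBER, and emit a literal 0 for the failed ones.

```python
import sqlite3

# Hypothetical student results table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (module TEXT, grade TEXT)")
conn.executemany("INSERT INTO results VALUES (?, ?)",
                 [("ModuleA", "Pass"), ("ModuleB", "Pass"),
                  ("ModuleC", "Fail"), ("ModuleD", "Pass")])

# ROW_NUMBER runs only over the passed rows, so failed modules never
# consume a number; the failed rows are unioned in with a literal 0.
rows = conn.execute("""
    SELECT ROW_NUMBER() OVER (ORDER BY module) AS line_no, module, grade
    FROM   results WHERE grade = 'Pass'
    UNION ALL
    SELECT 0, module, grade
    FROM   results WHERE grade = 'Fail'
    ORDER  BY module
""").fetchall()

for r in rows:
    print(r)
```

    ModuleD now gets line number 3, not 4, because the failed ModuleC never entered the numbering.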

  • Problem with the GetParameter() function in IScript

    Hello

    I am facing a problem with the GetParameter() function in an IScript. I build the URL below and call the IScript:

    GenerateScriptContentURL("EMPLOYEE", "MFC", Record.WEBLIB_REPT_SJ, Field.ISCRIPT1, "FieldFormula", "IScript_GetAttachment") | "?FileName=" | &AttachUserFileURL;

    Before generating the URL, I encrypt the ZIP file name and assign it to the string variable &AttachUserFileURL, which I concatenate into the link above.

    Then I try to read the encrypted value back with %Request.GetParameter("FileName") in the IScript, but it fails to retrieve special characters such as + and =.

    Please help with this.

    Thank you

    Edited by: 936729 may 25, 2012 03:35

    + and = are allowed in URLs, but they have special meanings there. You just have to URL-encode the value with EncodeURLForQueryString(&AttachUserFileURL) before adding it to your URL:

    GenerateScriptContentURL("EMPLOYEE", "HRMS", Record.WEBLIB_REPT_SJ, Field.ISCRIPT1, "FieldFormula", "IScript_GetAttachment") | "?FileName=" | EncodeURLForQueryString(&AttachUserFileURL);
    

    Here is the PeopleBooks entry for EncodeURLForQueryString:

    http://docs.Oracle.com/CD/E28394_01/pt852pbh1/Eng/psbooks/TPCL/htm/tpcl02.htm#_6453b1b1_1355ab71343__503e
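    The same idea, sketched in Python for anyone who wants to see what the encoding actually does (the token below is made-up sample data, and quote/unquote stand in for EncodeURLForQueryString and the server-side decode):

```python
from urllib.parse import quote, unquote

# '+' and '=' have special meanings in a query string ('+' decodes to a
# space, '=' separates name from value), so a raw encrypted token that
# contains them gets mangled unless it is percent-encoded first.
token = "aB+cD=="                     # hypothetical encrypted file name
encoded = quote(token, safe="")       # percent-encode every reserved char
url = "IScript_GetAttachment?FileName=" + encoded

print(url)
# The receiving side decodes the parameter back to the original value:
assert unquote(encoded) == token
```

    After encoding, the parameter round-trips intact, which is exactly what %Request.GetParameter("FileName") needs.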
