Help with analytical functions
Hi all,
I'm on an Oracle 11g DB and have records in a table that look like this:
transaction_ref line_type description
-------------------- -------------- ---------------
10 DETAIL abc123
10 DETAIL abc978
10 DETAIL test
10 DETAIL test
10 DETAIL test
20 DETAIL abcy
20 DETAIL abc9782
20 DETAIL test12
20 DETAIL test32
Using an analytic function, I generate a rownumber per transaction_ref as follows:

SELECT row_number() over (partition by transaction_ref order by 1) rownumber
FROM mytable;
transaction_ref line_type description rownumber
-------------------- -------------- --------------- ----------------
10 DETAIL abc123 1
10 DETAIL abc978 2
10 DETAIL test 3
10 DETAIL test 4
10 DETAIL test 5
20 DETAIL abcy 1
20 DETAIL abc9782 2
20 DETAIL test12 3
20 DETAIL test32 4
However, for my needs, I need the rownumber as follows: starting from 1 for each transaction, I want the row number to increase in steps of 3.
transaction_ref line_type description rownumber
-------------------- -------------- --------------- ----------------
10 DETAIL abc123 1
10 DETAIL abc978 4
10 DETAIL test 7
10 DETAIL test 10
10 DETAIL test 13
20 DETAIL abcy 1
20 DETAIL abc9782 4
20 DETAIL test12 7
20 DETAIL test32 10
....
Thank you
Maëlle
Published by: user565538 on June 4, 2011 17:32
with mytable as (
select 10 transaction_ref,'DETAIL' line_type,'abc123' description from dual union all
select 10,'DETAIL','abc978' from dual union all
select 10,'DETAIL','test' from dual union all
select 10,'DETAIL','test' from dual union all
select 10,'DETAIL','test' from dual union all
select 20,'DETAIL','abcy' from dual union all
select 20,'DETAIL','abc9782' from dual union all
select 20,'DETAIL','test12' from dual union all
select 20,'DETAIL','test32' from dual
)
SELECT transaction_ref,
line_type,
description,
(row_number() over (partition by transaction_ref order by 1) - 1) * 3 + 1 rownumber
FROM mytable
/
TRANSACTION_REF LINE_T DESCRIP ROWNUMBER
--------------- ------ ------- ----------
10 DETAIL abc123 1
10 DETAIL abc978 4
10 DETAIL test 7
10 DETAIL test 10
10 DETAIL test 13
20 DETAIL abcy 1
20 DETAIL abc9782 4
20 DETAIL test12 7
20 DETAIL test32 10
9 rows selected.
SQL>
SY.
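The (ROW_NUMBER() - 1) * 3 + 1 arithmetic can be sanity-checked outside Oracle. The sketch below is my own illustration, not SY's code: it rebuilds the thread's table in SQLite via Python (SQLite 3.25+ supports the same window functions). Note that ORDER BY 1 inside OVER orders by the constant 1, so row order within a transaction is arbitrary; the sketch orders by description to make the result deterministic, which is why ref 20 lists abc9782 before abcy.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (transaction_ref INTEGER, line_type TEXT, description TEXT);
INSERT INTO mytable VALUES
 (10,'DETAIL','abc123'),(10,'DETAIL','abc978'),(10,'DETAIL','test'),
 (10,'DETAIL','test'),(10,'DETAIL','test'),
 (20,'DETAIL','abcy'),(20,'DETAIL','abc9782'),
 (20,'DETAIL','test12'),(20,'DETAIL','test32');
""")
rows = conn.execute("""
    SELECT transaction_ref, description,
           -- restart at 1 per transaction_ref, then stretch the step to 3
           (ROW_NUMBER() OVER (PARTITION BY transaction_ref
                               ORDER BY description) - 1) * 3 + 1 AS rownumber
    FROM mytable
    ORDER BY transaction_ref, rownumber
""").fetchall()
print(rows)
```

The same (n - 1) * step + 1 trick works for any step size.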
Tags: Database
Similar Questions
-
More help with analytical functions
I had great help here yesterday and I need it once more today. I guess I'm still not able to get a solid understanding of analytic functions. So here's the problem:
a table with 3 columns:
product_id (int), sale_date (date), count_sold (int) - each row records the number of items sold for the product on a given date.
The query should return the 3 columns of the table AND a fourth column that contains the date with the best sales for the product. If there are two or more dates with equal sales, the latest one is chosen.
Is this possible using an analytical function appropriately and without using a subquery?
example:
product_id, sale_date, count_sold, high_sales_date
1, 01/01/2008, 10, 10/05/2008
1, 10/03/2008, 20, 10/05/2008
1, 10/04/2008, 25, 10/05/2008
1, 10/05/2008, 25, 10/05/2008
1, 01/06/2008, 22, 10/05/2008
2, 05/12/2008, 12, 05/12/2008
2, 06/01/2009, 10, 05/12/2008
Thank you

Hello
Try this:
SELECT product_id, sale_date, count_sold,
       FIRST_VALUE (sale_date) OVER (PARTITION BY product_id
                                     ORDER BY count_sold DESC, sale_date DESC) AS high_sales_date
FROM table_x;
If you would post INSERT statements for your data, then I could test it.
Bonus question: why use FIRST_VALUE with a descending ORDER BY, and not LAST_VALUE with the default ascending ORDER BY?
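Here is a runnable sketch of the FIRST_VALUE approach (my own illustration in SQLite with ISO date strings, not tested Oracle output). It also illustrates one answer to the bonus question: the default window frame is RANGE UNBOUNDED PRECEDING TO CURRENT ROW, so FIRST_VALUE with a descending ORDER BY always sees the partition's top row, while LAST_VALUE with the default frame would only see rows up to the current one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_x (product_id INTEGER, sale_date TEXT, count_sold INTEGER);
INSERT INTO table_x VALUES
 (1,'2008-01-01',10),(1,'2008-03-10',20),(1,'2008-04-10',25),
 (1,'2008-05-10',25),(1,'2008-06-01',22),
 (2,'2008-12-05',12),(2,'2009-01-06',10);
""")
rows = conn.execute("""
    SELECT product_id, sale_date, count_sold,
           -- highest count wins; on ties, the later date wins
           FIRST_VALUE(sale_date) OVER (PARTITION BY product_id
                                        ORDER BY count_sold DESC, sale_date DESC)
             AS high_sales_date
    FROM table_x
    ORDER BY product_id, sale_date
""").fetchall()
print(rows)
```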
-
Help with analytical functions - Windowing
Hello
I'm using Oracle 11.2.0.4.0.
I want to compute the sum of all amounts over each rolling 3-day window, starting from the oldest date. I also want to label each window with the end date of its 3-day period.
My real requirement is slightly more complicated, but I use this example to illustrate what I'm trying to do:
create table test (dt date, amt number, run_id number);
insert into test values (to_date('22/04/2015','dd/mm/yyyy'),1,1);
insert into test values (to_date('23/04/2015','dd/mm/yyyy'),1,1);
insert into test values (to_date('24/04/2015','dd/mm/yyyy'),1,1);
insert into test values (to_date('25/04/2015','dd/mm/yyyy'),1,1);
insert into test values (to_date('27/04/2015','dd/mm/yyyy'),5,1);
insert into test values (to_date('28/04/2015','dd/mm/yyyy'),2,1);
insert into test values (to_date('29/04/2015','dd/mm/yyyy'),1,1);
insert into test values (to_date('30/04/2015','dd/mm/yyyy'),1,1);
insert into test values (to_date('01/05/2015','dd/mm/yyyy'),1,1);
insert into test values (to_date('02/05/2015','dd/mm/yyyy'),1,1);
insert into test values (to_date('03/05/2015','dd/mm/yyyy'),1,1);
insert into test values (to_date('04/05/2015','dd/mm/yyyy'),1,1);
The output should look like the example below. The PERIOD column needs
to show the end of each 3-day period:
DT          AMT  SUM_PER_PERIOD  PERIOD
22/04/2015    1               1  24/04/2015
23/04/2015    1               2  24/04/2015
24/04/2015    1               3  24/04/2015
25/04/2015    1               3  27/04/2015
27/04/2015    5               6  27/04/2015
28/04/2015    2               7  30/04/2015
29/04/2015   20              27  30/04/2015
30/04/2015   30              52  30/04/2015
01/05/2015    5              55  03/05/2015
02/05/2015    5              50  03/05/2015
02/05/2015   10              50  03/05/2015
03/05/2015    1              21  03/05/2015
All I can manage is this:

select dt
     , amt
     , sum(amt) over (partition by run_id order by dt range between 2 preceding and current row) sum_per_period
from test
order by dt;
Can anyone help?
It's very kind of you to give the create and insert statements... but I corrected the data a bit -
it did not match the output, see below.
Starting from 29/04, you forgot to change the dates and the amounts...
insert into test values (to_date('22/04/2015','dd/mm/yyyy'),1,1);
insert into test values (to_date('23/04/2015','dd/mm/yyyy'),1,1);
insert into test values (to_date('24/04/2015','dd/mm/yyyy'),1,1);
insert into test values (to_date('25/04/2015','dd/mm/yyyy'),1,1);
insert into test values (to_date('27/04/2015','dd/mm/yyyy'),5,1);
insert into test values (to_date('28/04/2015','dd/mm/yyyy'),2,1);
insert into test values (to_date('29/04/2015','dd/mm/yyyy'),20,1);
insert into test values (to_date('30/04/2015','dd/mm/yyyy'),30,1);
insert into test values (to_date('01/05/2015','dd/mm/yyyy'),5,1);
insert into test values (to_date('02/05/2015','dd/mm/yyyy'),5,1);
insert into test values (to_date('02/05/2015','dd/mm/yyyy'),10,1);
insert into test values (to_date('03/05/2015','dd/mm/yyyy'),1,1);
Your periods will change if you insert a new first date...
so I guess you want a specific start date, in this case 22/04/2015, and a specific end date.
Creating periods from this first date and then grouping by these periods is easier with a fixed first date and a delta of 3 days.
The first step is to match the periods to your data (adapted):
with periods as (
  select date_start + (level-1) * period_days period_start,
         date_start + level * period_days period_end,
         period_days
  from (select to_date('21/04/2015', 'dd/mm/yyyy') date_start,
               to_date('04/05/2015', 'dd/mm/yyyy') date_end,
               3 period_days
        from dual)
  connect by date_start + level * period_days < date_end
)
select *
from test t, periods p
where t.dt > p.period_start
  and t.dt <= p.period_end
This gives your data with each row's period start and end dates:
DT          AMT  RUN_ID  PERIOD_START  PERIOD_END  PERIOD_DAYS
22/04/2015    1       1  21/04/2015    24/04/2015            3
23/04/2015    1       1  21/04/2015    24/04/2015            3
24/04/2015    1       1  21/04/2015    24/04/2015            3
25/04/2015    1       1  24/04/2015    27/04/2015            3
27/04/2015    5       1  24/04/2015    27/04/2015            3
28/04/2015    2       1  27/04/2015    30/04/2015            3
29/04/2015   20       1  27/04/2015    30/04/2015            3
30/04/2015   30       1  27/04/2015    30/04/2015            3
01/05/2015    5       1  30/04/2015    03/05/2015            3
02/05/2015    5       1  30/04/2015    03/05/2015            3
02/05/2015   10       1  30/04/2015    03/05/2015            3
03/05/2015    1       1  30/04/2015    03/05/2015            3
and then sum the amt over the 3-day window:
with periods as (
  select date_start + (level-1) * period_days period_start,
         date_start + level * period_days period_end,
         period_days
  from (select to_date('21/04/2015', 'dd/mm/yyyy') date_start,
               to_date('04/05/2015', 'dd/mm/yyyy') date_end,
               3 period_days
        from dual)
  connect by date_start + level * period_days < date_end
)
select t.dt, t.amt,
       sum(amt) over (order by t.dt range between 2 preceding and current row) sum_per_period,
       p.period_end period
from test t, periods p
where t.dt > p.period_start
  and t.dt <= p.period_end
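The combined logic can be cross-checked outside Oracle. This is a sketch under stated assumptions, not the poster's code: dates are replaced by integer day numbers (22 = 22-Apr-2015, 26 is absent, 31-33 = 1-3 May), the RANGE frame is emulated with a correlated subquery so it runs on any SQLite build, and the CONNECT BY period generator is replaced by integer arithmetic anchored at day 21.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (day INTEGER, amt INTEGER, run_id INTEGER);
-- corrected data: day 22 = 22-Apr-2015 ... day 33 = 03-May-2015 (26-Apr absent)
INSERT INTO test VALUES
 (22,1,1),(23,1,1),(24,1,1),(25,1,1),(27,5,1),(28,2,1),
 (29,20,1),(30,30,1),(31,5,1),(32,5,1),(32,10,1),(33,1,1);
""")
rows = conn.execute("""
    SELECT day, amt,
           -- rolling 3-day sum, i.e. RANGE BETWEEN 2 PRECEDING AND CURRENT ROW
           (SELECT SUM(amt) FROM test t2
             WHERE t2.day BETWEEN t.day - 2 AND t.day) AS sum_per_period,
           -- fixed 3-day buckets anchored at day 21 (21-Apr-2015)
           21 + ((day - 21 + 2) / 3) * 3 AS period_end
    FROM test t
    ORDER BY day, amt
""").fetchall()
print(rows)
```

The sums and period ends match the thread's expected output row for row.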
giving your output as requested:
DT          AMT  SUM_PER_PERIOD  PERIOD
22/04/2015    1               1  24/04/2015
23/04/2015    1               2  24/04/2015
24/04/2015    1               3  24/04/2015
25/04/2015    1               3  27/04/2015
27/04/2015    5               6  27/04/2015
28/04/2015    2               7  30/04/2015
29/04/2015   20              27  30/04/2015
30/04/2015   30              52  30/04/2015
01/05/2015    5              55  03/05/2015
02/05/2015    5              50  03/05/2015
02/05/2015   10              50  03/05/2015
03/05/2015    1              21  03/05/2015
-
Need help with analytical function (LAG)
The requirement: I have a table with the columns described below.

col1  cnt  flag  flag2
abc     1  Y     Y
xyz     1  Y     Y
xyz     1  Y     NULL
xyz     2  N     N
xyz     2  Y     NULL
def     1  Y     Y
def     1  N     NULL
To get the flag2 column:
1) for rownum = 1, assign flag2 = flag
2) compare the current row's col1 and cnt with the previous row's col1 and cnt; if both are identical, assign null...
Here's the query I used to get the values of Flag2
SELECT colm1, cnt, flag,
       CASE WHEN LAG(cnt, 1, null) OVER (PARTITION BY colm1 ORDER BY colm1 DESC NULLS LAST) IS NULL
             AND LAG(flag, 1, NULL) OVER (PARTITION BY colm1 ORDER BY colm1, cnt DESC NULLS LAST) IS NULL
            THEN flag
       END AS flag2
FROM table1
but the query above returns the output below, which is wrong:
col1  cnt  flag  flag2
abc     1  Y     Y
xyz     1  Y     Y
xyz     1  Y     NULL
xyz     2  N     NULL
xyz     2  Y     NULL
def     1  Y     Y
def     1  N     NULL
Thank you
Published by: user9370033 on April 8, 2010 23:25

Well, you have not fully explained your requirement:
1) assign flag2 = flag for rownum = 1
2) compare the current row's col1 and cnt with the previous row's; if they are identical, assign null...
You don't say what flag2 must be set to if col1 and cnt are NOT the same as on the previous row.
But how about this as a first guess at what you mean...
SQL> with t as (select 'abc' as col1, 1 as cnt, 'Y' as flag from dual union all
                select 'xyz', 1, 'Y' from dual union all
                select 'xyz', 1, 'Y' from dual union all
                select 'xyz', 2, 'N' from dual union all
                select 'xyz', 2, 'Y' from dual union all
                select 'def', 1, 'Y' from dual union all
                select 'def', 1, 'N' from dual)
     -- END OF TEST DATA
     select col1, cnt, flag
           ,case when lag(col1) over (order by col1, cnt) is null then flag
                 when lag(col1) over (order by col1, cnt) = col1 and
                      lag(cnt) over (order by col1, cnt) = cnt then null
                 else flag
            end as flag2
     from t
     /

COL        CNT F F
--- ---------- - -
abc          1 Y Y
def          1 Y Y
def          1 N
xyz          1 Y Y
xyz          1 Y
xyz          2 Y Y
xyz          2 N

7 rows selected.

SQL>
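The LAG solution above translates directly to SQLite; the sketch below is my own cross-check, not the answerer's code. SQLite's rowid is added as a tie-breaker so repeated (col1, cnt) pairs order deterministically (the Oracle output above leaves ties in arbitrary order).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (col1 TEXT, cnt INTEGER, flag TEXT);
INSERT INTO t VALUES ('abc',1,'Y'),('xyz',1,'Y'),('xyz',1,'Y'),
                     ('xyz',2,'N'),('xyz',2,'Y'),('def',1,'Y'),('def',1,'N');
""")
rows = conn.execute("""
    SELECT col1, cnt, flag,
           CASE WHEN LAG(col1) OVER (ORDER BY col1, cnt, rowid) IS NULL THEN flag
                WHEN LAG(col1) OVER (ORDER BY col1, cnt, rowid) = col1
                 AND LAG(cnt)  OVER (ORDER BY col1, cnt, rowid) = cnt  THEN NULL
                ELSE flag
           END AS flag2
    FROM t
    ORDER BY col1, cnt, rowid
""").fetchall()
print(rows)
```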
-
version 9.2
Here is a sample
Output: number of distinct orders for each date

WITH temp AS (SELECT 10 id, TRUNC (SYSDATE - 1) dt, 101 ord_id FROM DUAL UNION
              SELECT 11 id, TRUNC (SYSDATE - 1) dt, 101 ord_id FROM DUAL UNION
              SELECT 11 id, TRUNC (SYSDATE) dt, 103 ord_id FROM DUAL UNION
              SELECT 13 id, TRUNC (SYSDATE) dt, 104 ord_id FROM DUAL)
SELECT * FROM temp
Dt    Count
1/25      1
1/26      2
ME_XE? WITH temp AS
         (SELECT 10 id, TRUNC (SYSDATE - 1) dt, 101 ord_id FROM DUAL
          UNION
          SELECT 11 id, TRUNC (SYSDATE - 1) dt, 101 ord_id FROM DUAL
          UNION
          SELECT 11 id, TRUNC (SYSDATE) dt, 103 ord_id FROM DUAL
          UNION
          SELECT 13 id, TRUNC (SYSDATE) dt, 104 ord_id FROM DUAL)
       SELECT dt, count(distinct ord_id)
       FROM temp
       group by dt;

DT                   COUNT(DISTINCTORD_ID)
-------------------- ---------------------
25-JAN-2009 12:00:00                     1
26-JAN-2009 12:00:00                     2

2 rows selected.

Elapsed: 00:00:00.01
ME_XE?
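The same COUNT(DISTINCT ...) grouping, cross-checked in SQLite with fixed dates standing in for TRUNC(SYSDATE) (my illustration, not the answerer's session):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE temp (id INTEGER, dt TEXT, ord_id INTEGER);
-- order 101 appears twice on the 25th but counts once
INSERT INTO temp VALUES (10,'2009-01-25',101),(11,'2009-01-25',101),
                        (11,'2009-01-26',103),(13,'2009-01-26',104);
""")
rows = conn.execute("""
    SELECT dt, COUNT(DISTINCT ord_id) AS cnt
    FROM temp
    GROUP BY dt
    ORDER BY dt
""").fetchall()
print(rows)
```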
-
get a single result with analytical functions
SELECT delrazjn.TYPE, delrazjn.DATE, delrazjn.USER, delrazjn.IID
FROM ZKET_DR delraz, ZKET_DR_JN delrazjn
WHERE delraz.IID = delrazjn.IID
AND (delrazjn.TYPE = 'UP2' OR delrazjn.TYPE = 'UP1') AND delrazjn.IID_N IS NOT NULL

This is an example of my SQL. But there is more than one result per delrazjn.IID. How can I keep only the first one entered in the DB and ignore the others, so that there is exactly one result per IID? The first result entered is the one with the earliest delrazjn.DATE.
I tried to do that with analytic functions, but without success.

You're right, I told you that I can't test the code.
I hope this works now:
SELECT delrazjn.TYPE, delrazjn.DATE, delrazjn.USER, delrazjn.IID
FROM ZKET_DR delraz, ZKET_DR_JN delrazjn
WHERE delraz.IID = delrazjn.IID
  AND (delrazjn.TYPE = 'UP2' OR delrazjn.TYPE = 'UP1')
  AND delrazjn.IID_N IS NOT NULL
  AND delrazjn.date = (select min(d.date)
                       from ZKET_DR_JN d
                       where d.type = delrazjn.type
                         and d.user = delrazjn.user)
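For the "first row entered" requirement, an analytic alternative to the MIN subquery is ROW_NUMBER. The sketch below uses a hypothetical, simplified stand-in table (the real ZKET_DR_JN columns are unknown to me), keeping the earliest-dated row per IID:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- hypothetical, simplified stand-in for ZKET_DR_JN
CREATE TABLE jn (iid INTEGER, type TEXT, dt TEXT);
INSERT INTO jn VALUES (1,'UP1','2012-01-03'),(1,'UP2','2012-01-01'),
                      (1,'UP1','2012-01-02'),(2,'UP2','2012-02-05'),
                      (2,'UP1','2012-02-06');
""")
rows = conn.execute("""
    SELECT iid, type, dt
    FROM (SELECT iid, type, dt,
                 ROW_NUMBER() OVER (PARTITION BY iid ORDER BY dt) AS rn
          FROM jn)
    WHERE rn = 1
    ORDER BY iid
""").fetchall()
print(rows)
```

Unlike the MIN subquery, this guarantees exactly one row per IID even when two rows share the minimum date.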
-
With the help of analytical functions
Hi all
I'm using ODI 11g (11.1.1.3.0) and I'm building an interface that uses an analytic function in a column mapping, something like:
SUM(salary) OVER (PARTITION BY ...)
The problem is that when ODI sees the SUM, it treats it as an aggregate function and generates a GROUP BY. Is it possible to make ODI understand that it is not an aggregate function?
I tried creating an option to specify whether the mapping is analytic, and then updated the IKM, with no luck.
<% if (odiRef.getUserExit("ANALYTIC").equals("1")) {%>
<%} else {%>
<%=odiRef.getGrpBy(i)%>
<%=odiRef.getHaving(i)%>
<%}%>
Thanks in advance

Seth,
Try this thing posted by Uli:
http://www.business-intelligence-quotient.com/?p=905
-
Problem with analytical function for date
Hi all
Oracle version:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
I have a problem with an analytic function over a date. I'm trying to group records based on a timestamp, but I'm failing to do so.
Could you please help me find what I'm missing?
THX

This is the subquery. No issue with this; I'm just posting it for reference.

select sum(disclosed_cost_allocation.to_be_paid_amt) amt,
       substr(reference_data.ref_code,4,10) cd,
       to_char(external_order_status.status_updated_tmstp, 'DD-MON-YYYY HH24:MI:SS') tmstp,
       disclosed_closing_cost.disclosed_closing_cost_id id
from deal.fee_mapping_definition,
     deal.fee_index_definition,
     deal.fee_closing_cost_item,
     deal.closing_cost,
     deal.document_generation_request,
     deal.product_request,
     deal.external_order_request,
     deal.external_order_status,
     deal.disclosed_closing_cost,
     deal.disclosed_cost_allocation,
     deal.reference_data
where fee_mapping_definition.fee_code = fee_index_definition.fee_code
  and fee_index_definition.fee_index_definition_id = fee_closing_cost_item.fee_index_definition_id
  and fee_closing_cost_item.closing_cost_id = closing_cost.closing_cost_id
  and closing_cost.product_request_id = document_generation_request.product_request_id
  and closing_cost.product_request_id = product_request.product_request_id
  and product_request.deal_id = external_order_request.deal_id
  and external_order_request.external_order_request_id = external_order_status.external_order_request_id
  and external_order_request.external_order_request_id = disclosed_closing_cost.external_order_request_id
  and disclosed_closing_cost.disclosed_closing_cost_id = disclosed_cost_allocation.disclosed_closing_cost_id
  and fee_index_definition.fee_index_definition_id = disclosed_closing_cost.fee_index_definition_id
  and fee_mapping_definition.document_line_series_ref_id = reference_data.reference_data_id
  and document_generation_request.document_package_ref_id in (7392, 2209)
  and external_order_status.order_status_txt = 'GenerationCompleted'
  and fee_mapping_definition.document_line_series_ref_id in (7789, 7788, 7596)
  and fee_mapping_definition.document_type_ref_id = 1099
  and document_generation_request.product_request_id in
      (select product_request.product_request_id
       from deal.disclosed_cost_allocation,
            deal.disclosed_closing_cost,
            deal.external_order_request,
            deal.product_request,
            deal.scenario
       where disclosed_cost_allocation.disclosed_closing_cost_id = disclosed_closing_cost.disclosed_closing_cost_id
         and disclosed_closing_cost.external_order_request_id = external_order_request.external_order_request_id
         and external_order_request.deal_id = product_request.deal_id
         and product_request.scenario_id = scenario.scenario_id
         and scenario.scenario_status_type_ref_id = 7206
         and product_request.servicing_loan_acct_num is not null
         and product_request.servicing_loan_acct_num = 0017498379
         --and disclosed_cost_allocation.disclosed_cost_allocation_id = 5095263
      )
group by disclosed_closing_cost.disclosed_closing_cost_id,
         external_order_status.status_updated_tmstp,
         reference_data.ref_code,
         disclosed_cost_allocation.to_be_paid_amt
order by 3 desc, 1 desc;

Result:

 AMT  CD         TMSTP                 ID
2000  1304-1399  28-JUL-2012 19:49:47  6880959
 312  1302       28-JUL-2012 19:49:47  6880958
  76  1303       28-JUL-2012 19:49:47  6880957
2000  1304-1399  28-JUL-2012 18:02:16  6880539
 312  1302       28-JUL-2012 18:02:16  6880538
  76  1303       28-JUL-2012 18:02:16  6880537

But when I try to group the timestamp using an analytic function:

select amt, cd,
       rank() over (partition by tmstp order by tmstp desc) rn
from (
      -- the same aggregate query as above, repeated verbatim in the original post
     );

Result:

 AMT  CD         RN
 312  1302        1
2000  1304-1399   1
  76  1303        1
 312  1302        1
2000  1304-1399   1
  76  1303        1

Required output:

 AMT  CD         RN
 312  1302        1
2000  1304-1399   1
  76  1303        1
 312  1302        2
2000  1304-1399   2
  76  1303        2
Rod.

Hey, Rod,
My guess is that you want:
, dense_rank () over (order by tmstp desc) AS rn
RANK means you'll skip numbers when there is a tie. For example, if all 3 rows have the exact same latest tmstp, all 3 rows would be assigned number 1; RANK would then assign 4 to the next row, but DENSE_RANK assigns 2.
"PARTITION BY x" means that you get a separate series of numbers (starting with 1) for each value of x. If you want just one series of numbers for the entire result set, then do not use a PARTITION BY clause at all. (PARTITION BY is never required.)
Maybe you want to PARTITION BY cd. I can't tell without some sample data, as well as an explanation of why you want those results from that data.
You certainly don't want to PARTITION BY the same expression you ORDER BY; that simply means that all the rows are tied for #1.

I hope that answers your question.
If not, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) for all of the tables involved, and also post the results you want from that data.
Explain, using specific examples, how you get those results from that data.
Simplify the problem as much as possible.
Always tell what version of Oracle you are using.
See the forum FAQ {message:id=9360002}

Published by: Frank Kulash, August 1, 2012 13:20
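Frank's RANK vs DENSE_RANK point in a runnable sketch (my illustration: three rows tied on the latest tmstp, one earlier row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE s (tmstp TEXT)")
conn.executemany("INSERT INTO s VALUES (?)",
                 [("2012-07-28 19:49:47",), ("2012-07-28 19:49:47",),
                  ("2012-07-28 19:49:47",), ("2012-07-28 18:02:16",)])
rows = conn.execute("""
    SELECT tmstp,
           RANK()       OVER (ORDER BY tmstp DESC) AS rnk,   -- jumps to 4 after the 3-way tie
           DENSE_RANK() OVER (ORDER BY tmstp DESC) AS drnk   -- continues with 2
    FROM s
    ORDER BY tmstp DESC
""").fetchall()
print(rows)
```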
-
Is there a shorter (better) way with analytical functions?
Here's a little test scenario:
For each id, I want to add up the values of m, but count each typ only once. Here is the setup:

create table t (id number, pos number, typ number, m number);
insert into t values (1,1,1,100);
insert into t values (1,2,1,100);
insert into t values (1,3,2, 50);
insert into t values (2,1,3, 30);
insert into t values (2,2,4, 70);
insert into t values (3,1,1,100);
insert into t values (3,2,2, 50);
insert into t values (4,1,3, 30);
insert into t values (4,2,5, 80);
insert into t values (4,3,3, 30);
insert into t values (5,1,3, 30);
insert into t values (5,2,6, 30);
insert into t values (6,1,2, 50);
insert into t values (6,2,7, 50);
insert into t values (6,3,2, 50);
insert into t values (7,1,4, 70);
insert into t values (7,2,4, 70);
insert into t values (7,3,4, 70);
The long way would be:

with t1 as (select id, typ, min(m) m1 from t group by id, typ)
select id, sum(m1) f from t1 group by id order by 1;

        ID          F
---------- ----------
         1        150
         2        100
         3        150
         4        110
         5         60
         6        100
         7         70

but I wonder, is it possible to get this result with a single select statement using analytic functions, something like:

select id, sum(m) over (partition by distinct typ) f  -- this does not work, it's only an idea of how it might look
from t group by id;
This is, firstly, an aggregation by id, typ, computing the min for each id, typ combination.
Then, for each id, the sum of those mins (one per id, typ combination for that particular id) is computed:

select distinct id, sum(min(m)) over (partition by id)
from t
group by id, typ
order by id
Published by: chris227 on 15.03.2013 07:39
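A sketch of chris227's answer in SQLite (my cross-check, not his code). SQLite is less comfortable than Oracle with the nested SUM(MIN(m)) OVER (...) form, so the aggregation step is made explicit in an inline view; the result is the same sum-of-minimums per id:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, pos INTEGER, typ INTEGER, m INTEGER)")
conn.executemany("INSERT INTO t VALUES (?,?,?,?)", [
    (1,1,1,100),(1,2,1,100),(1,3,2,50),(2,1,3,30),(2,2,4,70),(3,1,1,100),
    (3,2,2,50),(4,1,3,30),(4,2,5,80),(4,3,3,30),(5,1,3,30),(5,2,6,30),
    (6,1,2,50),(6,2,7,50),(6,3,2,50),(7,1,4,70),(7,2,4,70),(7,3,4,70)])
rows = conn.execute("""
    SELECT DISTINCT id, SUM(m1) OVER (PARTITION BY id) AS f
    FROM (SELECT id, typ, MIN(m) AS m1   -- one (minimum) m per id/typ pair
          FROM t GROUP BY id, typ)
    ORDER BY id
""").fetchall()
print(rows)
```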
-
Here is an example of the table data:

ID  NAME    START
 1  SARA    01-JAN-2006
 2  SARA    03-FEB-2006
 3  LAMBDA  21-MAR-2006
 4  SARA    13-APR-2006
 5  LAMBDA  01-JAN-2007
 6  LAMBDA  01-SEP-2007

I would like to get this:

NAME    START        STOP
SARA    01-JAN-2006  20-MAR-2006
LAMBDA  21-MAR-2006  12-APR-2006
SARA    13-APR-2006  31-DEC-2006
LAMBDA  01-JAN-2007  <null>

I tried using PARTITION BY with an analytic function, but partitioning by name combines all the SARA rows and all the LAMBDA rows into single groups/partitions, which is not what I am trying to get.
Is there an analytic function or another means to combine date ranges, but only when the same person appears consecutively?
Thank you.

This can be easily achieved using tabibitosan:
First of all, you need to identify the 'group' that each name in the list belongs to:
with sample_data as (select 1 id, 'SARA' name, to_date('01/01/2006', 'dd/mm/yyyy') start_date from dual union all
                     select 2 id, 'SARA' name, to_date('03/02/2006', 'dd/mm/yyyy') start_date from dual union all
                     select 3 id, 'LAMBDA' name, to_date('21/03/2006', 'dd/mm/yyyy') start_date from dual union all
                     select 4 id, 'SARA' name, to_date('13/04/2006', 'dd/mm/yyyy') start_date from dual union all
                     select 5 id, 'LAMBDA' name, to_date('01/01/2007', 'dd/mm/yyyy') start_date from dual union all
                     select 6 id, 'LAMBDA' name, to_date('01/09/2007', 'dd/mm/yyyy') start_date from dual)
select id, name, start_date,
       lead(start_date, 1, to_date('31/12/9999', 'dd/mm/yyyy')) over (order by start_date) next_start_date,
       row_number() over (order by start_date)
         - row_number() over (partition by name order by start_date) grp
from sample_data;

        ID NAME   START_DATE NEXT_START_DATE        GRP
---------- ------ ---------- --------------- ----------
         1 SARA   01/01/2006 03/02/2006               0
         2 SARA   03/02/2006 21/03/2006               0
         3 LAMBDA 21/03/2006 13/04/2006               2
         4 SARA   13/04/2006 01/01/2007               1
         5 LAMBDA 01/01/2007 01/09/2007               3
         6 LAMBDA 01/09/2007 31/12/9999               3
You can see the group number is generated by comparing the overall row number of all rows (in order) with the row number within the set of rows for that name (in the same order) - when there is a gap because another name appears in between, the difference changes.
Once you have identified the number of group for each set of rows, it is easy to find the min / max values in this group:
with sample_data as (select 1 id, 'SARA' name, to_date('01/01/2006', 'dd/mm/yyyy') start_date from dual union all
                     select 2 id, 'SARA' name, to_date('03/02/2006', 'dd/mm/yyyy') start_date from dual union all
                     select 3 id, 'LAMBDA' name, to_date('21/03/2006', 'dd/mm/yyyy') start_date from dual union all
                     select 4 id, 'SARA' name, to_date('13/04/2006', 'dd/mm/yyyy') start_date from dual union all
                     select 5 id, 'LAMBDA' name, to_date('01/01/2007', 'dd/mm/yyyy') start_date from dual union all
                     select 6 id, 'LAMBDA' name, to_date('01/09/2007', 'dd/mm/yyyy') start_date from dual),
     tabibitosan as (select id, name, start_date,
                            lead(start_date, 1, to_date('31/12/9999', 'dd/mm/yyyy')) over (order by start_date) next_start_date,
                            row_number() over (order by start_date)
                              - row_number() over (partition by name order by start_date) grp
                     from sample_data)
select name, min(start_date) start_date, max(next_start_date) stop_date
from tabibitosan
group by name, grp
order by start_date;

NAME   START_DATE STOP_DATE
------ ---------- ----------
SARA   01/01/2006 21/03/2006
LAMBDA 21/03/2006 13/04/2006
SARA   13/04/2006 01/01/2007
LAMBDA 01/01/2007 31/12/9999
If you want the max date to appear as null, you will need to use a CASE or DECODE to change it - I'll leave that as an exercise for you! I'll also let you find out how to get the day before for the stop_date.
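The tabibitosan technique, sketched in SQLite with ISO date strings ('9999-12-31' standing in for Oracle's 31/12/9999 default; my cross-check, not the answerer's code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sample_data (id INTEGER, name TEXT, start_date TEXT);
INSERT INTO sample_data VALUES
 (1,'SARA','2006-01-01'),(2,'SARA','2006-02-03'),(3,'LAMBDA','2006-03-21'),
 (4,'SARA','2006-04-13'),(5,'LAMBDA','2007-01-01'),(6,'LAMBDA','2007-09-01');
""")
rows = conn.execute("""
    WITH tabibitosan AS (
      SELECT id, name, start_date,
             LEAD(start_date, 1, '9999-12-31') OVER (ORDER BY start_date) AS next_start_date,
             -- overall position minus per-name position: constant within a run
             ROW_NUMBER() OVER (ORDER BY start_date)
               - ROW_NUMBER() OVER (PARTITION BY name ORDER BY start_date) AS grp
      FROM sample_data)
    SELECT name, MIN(start_date) AS start_date, MAX(next_start_date) AS stop_date
    FROM tabibitosan
    GROUP BY name, grp
    ORDER BY start_date
""").fetchall()
print(rows)
```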
-
I have a table with 2 columns, pro_id and sub_ver_id, and I need only 5 pro_id rows for each sub_ver_id.

SQL> select * from test1 order by sub_ver_id;
PRO_ID     SUB_VER_ID
---------- ----------
         1          0
         2          0
         3          0
         4          0
         5          0
         6          0
        10          1
        15          1
        16          1
        11          1
        12          1
        13          1
        14          1
        11          2
        12          3
.............................
I'm new to analytic functions. I came up with the query below, but I can't see how to limit srlno to only 5 rows for each sub_ver_id. Any advice would be much appreciated.

select distinct sub_ver_id, pro_id, row_number() over (order by sub_ver_id) srlno
from test1 order by sub_ver_id

Can be as below...

select *
from (select sub_ver_id, pro_id,
             row_number() over (partition by sub_ver_id order by null) srlno
      from test1)
where srlno <= 5
order by sub_ver_id
Thank you...
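A sketch of the top-5-per-group query (my illustration with a subset of the thread's data). ORDER BY null in the original leaves the 5 surviving rows arbitrary; the sketch orders by pro_id instead so the result is deterministic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test1 (pro_id INTEGER, sub_ver_id INTEGER)")
conn.executemany("INSERT INTO test1 VALUES (?, ?)",
                 [(p, 0) for p in (1, 2, 3, 4, 5, 6)]
                 + [(p, 1) for p in (10, 15, 16, 11, 12, 13, 14)]
                 + [(11, 2)])
rows = conn.execute("""
    SELECT sub_ver_id, pro_id, srlno
    FROM (SELECT sub_ver_id, pro_id,
                 ROW_NUMBER() OVER (PARTITION BY sub_ver_id
                                    ORDER BY pro_id) AS srlno
          FROM test1)
    WHERE srlno <= 5
    ORDER BY sub_ver_id, srlno
""").fetchall()
print(rows)
```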
-
Need help with Group functions
I'm a total novice with SQL, so please forgive me if the answer to my question seems to be too obvious
I'm working with the sample schemas (in particular with the employees table):
DESC employees;
result
What I have to do is select all the managers whose number of subordinates is higher than the average number of subordinates of the managers who work in the same department. What I've done so far is as follows:

SELECT mgr.employee_id manager_id, mgr.last_name manager, mgr.department_id, COUNT (emp.employee_id)
FROM employees emp JOIN employees mgr
ON emp.manager_id = mgr.employee_id
GROUP BY mgr.employee_id, mgr.last_name, mgr.department_id
ORDER BY mgr.department_id;
result
As you can see, I'm almost done. Now, I need only to calculate the average of the result of the COUNT function for each Department. But I'm totally stuck at this point.
Any advice?

Hello
Welcome to the forum!
user12107811 wrote:
I'm a total novice with SQL, so please forgive me if the answer to my question seems to be too obvious

Just the opposite! This looks like a pretty difficult assignment.
I work with diagrams of the sample (in particular with the employees table):
DESC employees;
result

What I have to do is select all the managers whose number of subordinates is higher than the average number of subordinates of the managers who work in the same department. What I've done so far is as follows:

SELECT mgr.employee_id manager_id, mgr.last_name manager, mgr.department_id, COUNT (emp.employee_id)
FROM employees emp JOIN employees mgr
ON emp.manager_id = mgr.employee_id
GROUP BY mgr.employee_id, mgr.last_name, mgr.department_id
ORDER BY mgr.department_id;

result

As you can see, I'm almost done. Now I only need to calculate the average of the COUNT result for each department, but I'm totally stuck at this point.
Any advice?

Yes, you're almost done. You just need to add one more condition: compute the average value of total_cnt (the COUNT you are already doing) per department, and compare each manager's total_cnt to it.
There are several ways to do this, including:
(a) a scalar subquery (in a HAVING clause)
(b) making a result set with one row per department, containing the average cnt, and joining it to your current result set
(c) analytic functions. Analytic functions are computed after the GROUP BY clause is applied and the aggregate functions are computed, so it is legitimate to say AVG(COUNT(*)) OVER (...).
I think (c) is the simplest. It involves using a subquery, but (a) and (b) also require subqueries.
This sounds like homework, so I won't do it for you.
Instead, here is a very similar problem with the hr.employees table.
Let's say that we are interested in the total salary paid for each job type in each department.

SELECT department_id, job_id, SUM (salary) AS sum_sal
FROM hr.employees
GROUP BY department_id, job_id
ORDER BY department_id, job_id;
Results:
DEPARTMENT_ID JOB_ID        SUM_SAL
------------- ---------- ----------
           10 AD_ASST          4400
           20 MK_MAN          13000
           20 MK_REP           6000
           30 PU_CLERK        13900
           30 PU_MAN          11000
           40 HR_REP           6500
           50 SH_CLERK        64300
           50 ST_CLERK        55700
           50 ST_MAN          36400
           60 IT_PROG         28800
           70 PR_REP          10000
           80 SA_MAN          61000
           80 SA_REP         243500
           90 AD_PRES         24000
           90 AD_VP           34000
          100 FI_ACCOUNT      39600
          100 FI_MGR          12000
          110 AC_ACCOUNT       8300
          110 AC_MGR          12000
              SA_REP           7000
Now suppose we want to find out which of these sum_sals is higher than the average sum_sal of its department.
For example, in department 110 (near the end of the list) there are two job types (AC_ACCOUNT and AC_MGR) that have sum_sals of 8300 and 12000. The average of these two numbers is 10150, so we select AC_MGR (because its sum_sal, 12000, is greater than 10150), and we do not include AC_ACCOUNT, because its sum_sal, 8300, is less than or equal to the department average.
In departments where there is only one job type (for example, department 70, or the null "department" at the end of the list above) the only sum_sal will equal the average; and because that sum_sal is not greater than the average, we want to exclude the row.

Let's start with the calculation of avg_sum_sal using the analytic AVG function:
SELECT department_id, job_id, SUM (salary) AS sum_sal,
       AVG (SUM (salary)) OVER (PARTITION BY department_id) AS avg_sum_sal
FROM hr.employees
GROUP BY department_id, job_id
ORDER BY department_id, job_id;
Output:
DEPARTMENT_ID JOB_ID        SUM_SAL AVG_SUM_SAL
------------- ---------- ---------- -----------
           10 AD_ASST          4400        4400
           20 MK_MAN          13000        9500
           20 MK_REP           6000        9500
           30 PU_CLERK        13900       12450
           30 PU_MAN          11000       12450
           40 HR_REP           6500        6500
           50 SH_CLERK        64300  52133.3333
           50 ST_CLERK        55700  52133.3333
           50 ST_MAN          36400  52133.3333
           60 IT_PROG         28800       28800
           70 PR_REP          10000       10000
           80 SA_MAN          61000      152250
           80 SA_REP         243500      152250
           90 AD_PRES         24000       29000
           90 AD_VP           34000       29000
          100 FI_ACCOUNT      39600       25800
          100 FI_MGR          12000       25800
          110 AC_ACCOUNT       8300       10150
          110 AC_MGR          12000       10150
              SA_REP           7000        7000
Now all we have to do is to compare the sum_sal and avg_sum_sal columns.
Because analytic functions are computed after the WHERE clause is applied, we cannot use avg_sum_sal in the WHERE clause of the same query level where it is calculated. But we can compute it in a subquery; then we can use avg_sum_sal however we like in the outer query:

WITH got_avg_sum_sal AS
(
    SELECT   department_id
    ,        job_id
    ,        SUM (salary)                                         AS sum_sal
    ,        AVG (SUM (salary)) OVER (PARTITION BY department_id) AS avg_sum_sal
    FROM     hr.employees
    GROUP BY department_id, job_id
)
SELECT   department_id
,        job_id
,        sum_sal
FROM     got_avg_sum_sal
WHERE    sum_sal > avg_sum_sal
ORDER BY department_id, job_id
;
Results:
DEPARTMENT_ID JOB_ID        SUM_SAL
------------- ---------- ----------
           20 MK_MAN          13000
           30 PU_CLERK        13900
           50 SH_CLERK        64300
           50 ST_CLERK        55700
           80 SA_REP         243500
           90 AD_VP           34000
          100 FI_ACCOUNT      39600
          110 AC_MGR          12000
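For comparison, the same filtering can be written without analytic functions by aggregating once in a subquery and comparing each row to a correlated average. This is only a sketch of the alternative, not part of the original answer; note the extra predicate needed so the null-department row compares against its own group:

```sql
WITH dept_job AS
(
    SELECT   department_id, job_id, SUM (salary) AS sum_sal
    FROM     hr.employees
    GROUP BY department_id, job_id
)
SELECT   d.department_id, d.job_id, d.sum_sal
FROM     dept_job d
WHERE    d.sum_sal > (SELECT AVG (x.sum_sal)
                      FROM   dept_job x
                      WHERE  x.department_id = d.department_id
                          OR (x.department_id IS NULL AND d.department_id IS NULL))
ORDER BY d.department_id, d.job_id
;
```

The analytic version usually reads (and often performs) better, since it scans the grouped data once instead of re-averaging per row.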
-
Help with analytic functions: counts preceding and following a date
Hello
Assume I have data as follows:

CUSTOMERID ORDERID ORDERDATE
---------- ------- ----------
xyz              1 01/10/2010
xyz              2 02/11/2010
xyz              3 03/12/2011
xyz              4 03/01/2011
xyz              5 03/02/2011
xyz              6 03/03/2011
abc              7 10/09/2010
abc              8 10/10/2010
abc              9 10/11/2010
abc             10 10/01/2011
abc             11 10/02/2011
abc             12 10/03/2011

Now I want to generate a report based on the above data with the following columns: customerid, number of orders placed in the last 30 days before the new year (01/01/2011), number of orders placed in the last 60 days before the new year, number of orders placed in the last 90 days before the new year, number of orders placed within 30 days after the new year, number of orders placed within 60 days after the new year, and number of orders placed within 90 days after the new year.
I am trying to do this using the following code, but could not succeed:
Kindly help. Thanks in advance.

select c.customerid,
       count(*) over (partition by c.customerid order by c.orderdate
                      RANGE interval '30' DAY PRECEDING) as "Last 1 month",
       count(*) over (partition by c.customerid order by c.orderdate
                      RANGE interval '60' DAY PRECEDING) as "Last 2 months",
       count(*) over (partition by c.customerid order by c.orderdate
                      RANGE interval '90' DAY PRECEDING) as "Last 3 months",
       count(*) over (partition by c.customerid order by c.orderdate
                      RANGE interval '30' DAY FOLLOWING) as "Following 1 month",
       count(*) over (partition by c.customerid order by c.orderdate
                      RANGE interval '60' DAY FOLLOWING) as "Following 2 months",
       count(*) over (partition by c.customerid order by c.orderdate
                      RANGE interval '90' DAY FOLLOWING) as "Following 3 months"
from   customer_orders c
where  orderdate < to_date('01/01/2011','dd/mm/yyyy')
Published by: 858747 on May 13, 2011 03:40
Published by: BluShadow on May 13, 2011 11:57
addition of {noformat}{noformat} tags to retain formatting. Please read: {message:id=9360002}
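Two problems stand out in the posted query: the analytic RANGE windows count relative to each row's own orderdate rather than relative to the fixed new-year date, and the WHERE clause removes all 2011 orders before the "Following" windows could ever see them. One way to count around a fixed date is plain conditional aggregation. This is a sketch (table and column names taken from the post, not a solution given in the thread):

```sql
SELECT   customerid
,        COUNT (CASE WHEN orderdate >= DATE '2011-01-01' - 30
                      AND orderdate <  DATE '2011-01-01'      THEN 1 END) AS last_30_days
,        COUNT (CASE WHEN orderdate >= DATE '2011-01-01' - 60
                      AND orderdate <  DATE '2011-01-01'      THEN 1 END) AS last_60_days
,        COUNT (CASE WHEN orderdate >= DATE '2011-01-01' - 90
                      AND orderdate <  DATE '2011-01-01'      THEN 1 END) AS last_90_days
,        COUNT (CASE WHEN orderdate >= DATE '2011-01-01'
                      AND orderdate <  DATE '2011-01-01' + 30 THEN 1 END) AS next_30_days
,        COUNT (CASE WHEN orderdate >= DATE '2011-01-01'
                      AND orderdate <  DATE '2011-01-01' + 60 THEN 1 END) AS next_60_days
,        COUNT (CASE WHEN orderdate >= DATE '2011-01-01'
                      AND orderdate <  DATE '2011-01-01' + 90 THEN 1 END) AS next_90_days
FROM     customer_orders
GROUP BY customerid
;
```

COUNT ignores NULLs, and a CASE with no ELSE returns NULL for non-matching rows, so each column counts only the orders that fall inside its date band.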
-
order by with analytic function
Hi gurus
Need your help again.
I have the following data.
Examples of data
select *
from (
    with a_reference as
    (
        select '100' grp, '25' lb, to_date ('2012-03-31', 'yyyy-mm-dd') ter_dt, 'ABC' package_name from dual union all
        select '100', '19', to_date ('2012-03-31', 'yyyy-mm-dd'), 'AA' from dual union all
        select '200', '25', to_date ('2012-03-31', 'yyyy-mm-dd'), 'CC' from dual union all
        select '300', '28', to_date ('2012-03-31', 'yyyy-mm-dd'), 'XX' from dual union all
        select '300', '28', to_date ('4444-12-31', 'yyyy-mm-dd'), 'XY' from dual
    )
    select grp, lb, ter_dt, package_name
    ,      row_number() over (partition by grp
                              order by case when lb = '19' then 1
                                            when lb = '25' then 2
                                       end) ro_nbr
    from   a_reference
)
-- where ro_nbr = 1
;
-----------
The query above returns the following result:
Existing query result
GRP LB TER_DT     PACKAGE_NAME RO_NBR
--- -- ---------- ------------ ------
100 19 2012-03-31 AA                1
100 25 2012-03-31 ABC               2
200 25 2012-03-31 CC                1
300 28 2012-03-31 XX                1
300 28 4444-12-31 XY                2

As you can see in the data above, I use the ORDER BY clause of the analytic function ROW_NUMBER to prioritize the data according to LB.
Now the problem is I need a single record for each group, so I write the following query:
Query
select *
from (
    with a_reference as
    (
        select '100' grp, '25' lb, to_date ('2012-03-31', 'yyyy-mm-dd') ter_dt, 'ABC' package_name from dual union all
        select '100', '19', to_date ('2012-03-31', 'yyyy-mm-dd'), 'AA' from dual union all
        select '200', '25', to_date ('2012-03-31', 'yyyy-mm-dd'), 'CC' from dual union all
        select '300', '28', to_date ('2012-03-31', 'yyyy-mm-dd'), 'XX' from dual union all
        select '300', '28', to_date ('4444-12-31', 'yyyy-mm-dd'), 'XY' from dual
    )
    select grp, lb, ter_dt, package_name
    ,      row_number() over (partition by grp
                              order by case when lb = '19' then 1
                                            when lb = '25' then 2
                                       end) ro_nbr
    from   a_reference
)
where ro_nbr = 1
;
The query result
GRP LB TER_DT     PACKAGE_NAME RO_NBR
--- -- ---------- ------------ ------
100 19 2012-03-31 AA                1
200 25 2012-03-31 CC                1
300 28 2012-03-31 XX                1

My required result is different: GRP 300 contains 2 records, and I need the one with the latest ter_dt, but right now I only get the earliest.
My output required
GRP LB TER_DT     PACKAGE_NAME RO_NBR
--- -- ---------- ------------ ------
100 19 2012-03-31 AA                1
200 25 2012-03-31 CC                1
300 28 4444-12-31 XY                1

Please guide. Thank you.
Hello
The query you posted assigns ro_nbr based on nothing other than lb. When 2 or more rows have an equal claim to ro_nbr = 1, one of them is chosen arbitrarily. If, when such a tie occurs, you want number 1 assigned based on some order, then add another expression to the analytic ORDER BY clause, like this:
WITH got_ro_nbr AS
(
    SELECT grp, lb, ter_dt, package_name
    ,      ROW_NUMBER () OVER (PARTITION BY grp
                               ORDER BY CASE
                                            WHEN lb = '19' THEN 1
                                            WHEN lb = '25' THEN 2
                                        END
                               ,        ter_dt DESC    -- * NEW *
                              ) AS ro_nbr
    FROM   a_reference
)
SELECT grp, lb, ter_dt, package_name
FROM   got_ro_nbr
WHERE  ro_nbr = 1
;
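Putting the tie-break together with the sample data from the question, a self-contained version of the corrected query looks like this (the explicit 'yyyy-mm-dd' format masks are an addition of mine; the original TO_DATE calls relied on the session's NLS date format):

```sql
WITH a_reference AS
(
    SELECT '100' AS grp, '25' AS lb, TO_DATE ('2012-03-31', 'yyyy-mm-dd') AS ter_dt, 'ABC' AS package_name FROM dual UNION ALL
    SELECT '100', '19', TO_DATE ('2012-03-31', 'yyyy-mm-dd'), 'AA' FROM dual UNION ALL
    SELECT '200', '25', TO_DATE ('2012-03-31', 'yyyy-mm-dd'), 'CC' FROM dual UNION ALL
    SELECT '300', '28', TO_DATE ('2012-03-31', 'yyyy-mm-dd'), 'XX' FROM dual UNION ALL
    SELECT '300', '28', TO_DATE ('4444-12-31', 'yyyy-mm-dd'), 'XY' FROM dual
)
, got_ro_nbr AS
(
    SELECT grp, lb, ter_dt, package_name
    ,      ROW_NUMBER () OVER (PARTITION BY grp
                               ORDER BY CASE WHEN lb = '19' THEN 1
                                             WHEN lb = '25' THEN 2
                                        END
                               ,        ter_dt DESC) AS ro_nbr
    FROM   a_reference
)
SELECT grp, lb, ter_dt, package_name
FROM   got_ro_nbr
WHERE  ro_nbr = 1
;
```

For GRP 300 both rows have lb = '28', so the CASE expression returns NULL for both and the tie falls through to ter_dt DESC, which puts XY (4444-12-31) first, as required.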
-
Need help with the function or metric derivative to calculate percentages of threshold for a measure
Hi, this is my first post to the community; I am a Foglight n00b needing help.
One thing (in fact the only thing) I like about Microsoft SCOM is how it graphs the availability of a metric, and I want to do the same thing in Foglight. I understand that this may require a derived metric or a function, but I am a bit lost right now.
Let's say I have a metric with thresholds defined as follows: normal from 0 inclusive, warning from 50 inclusive, critical from 75 inclusive, and fatal from 100 inclusive up to 9999. The metric is sampled periodically, and most (99%) of the time it is normal. I want to visually represent that fact, together with the percentage of time it spends in the warning, critical, and fatal threshold bands.
For dashboards, but mainly for reports, I am looking to compute the percentage of time the metric spends in each threshold band and put them in some form of chart, preferably very similar to how SCOM does it.
I would also like to augment the chart with numeric values, so that figures can be inserted into the report for clarification, for example: Normal 99%, Warning 0.5%, Critical 0.4%, Fatal 0.1%.
From what I have learned so far, I think we also have to include a band for the undefined threshold in order for this to work consistently for any metric.
I cannot seem to find this concept in Foglight, but I think it could be very useful to have. Any help is most appreciated.
Health and alarms is a standard view which can be used on any topology object. You can access it from the data browser, and you should also be able to add this view as a component in custom dashboards.
Here it is in a custom dashboard: