Use of analytic functions
Hi gurus, could someone enlighten me on analytic functions?
I used the following query:
select * from
(select filename,
        count(regexp_substr(filename,'.*era')) over (partition by regexp_substr(filename,'.*era')) cnt
 from l_x12n_835_fileinfo
 where partitionnum=76000 and clntsysnum=76500 and radt>='01-JAN-2010')
where cnt>1
The sample result set is:
FILENAME CNT
rsmedcalwa.20100105.chpwr.072.era 4
rsmedcalwa.20100105.chpwr.072.era.1 4
rsmedcalwa.20100105.chpwr.072.era.2 4
rsmedcalwa.20100105.chpwr.072.era.3 4
rsmedcalwa.20100105.chpwr.081.era 3
rsmedcalwa.20100105.chpwr.081.era.1 3
rsmedcalwa.20100105.chpwr.081.era.2 3
rsmedcalwa.20100106.chpwr.088.era 3
rsmedcalwa.20100106.chpwr.088.era.1 3
rsmedcalwa.20100106.chpwr.088.era.2 3
rsmedcalwa.20100108.chppr.363.era.3 4
rsmedcalwa.20100108.chppr.363.era.1 4
rsmedcalwa.20100108.chppr.363.era.2 4
rsmedcalwa.20100108.chppr.363.era 4
Now, I changed the query to:
select * from
(select filename,
        count(regexp_substr(filename,'.*era')) over (partition by regexp_substr(filename,'.*era') order by filename) cnt
 from l_x12n_835_fileinfo
 where partitionnum=76000 and clntsysnum=76500 and radt>='01-JAN-2010')
where cnt>1
The result set was:
FILENAME CNT
rsmedcalwa.20100105.chpwr.072.era.1 2
rsmedcalwa.20100105.chpwr.072.era.2 3
rsmedcalwa.20100105.chpwr.072.era.3 4
rsmedcalwa.20100105.chpwr.081.era.1 2
rsmedcalwa.20100105.chpwr.081.era.2 3
rsmedcalwa.20100106.chpwr.088.era.1 2
rsmedcalwa.20100106.chpwr.088.era.2 3
rsmedcalwa.20100108.chppr.363.era.1 2
rsmedcalwa.20100108.chppr.363.era.2 3
rsmedcalwa.20100108.chppr.363.era.3 4
rsmedcalwa.20100112.chpwr.175.era.1 2
rsmedcalwa.20100112.chpwr.175.era.2 3
(1) I don't understand how adding the ORDER BY clause changes the count. Could someone explain, please? When I change the ORDER BY to order by regexp_substr(filename,'.*era'), it gives me the correct number.
My requirement is to check how many similar file names I have.
(2) If there is any better way to do this, please let me know.
Hello
Analytic functions always operate on a window of the result set, which can be smaller than the whole result set.
If you have a PARTITION BY clause, the window for each row includes only rows with the same values for all the PARTITION BY expressions.
If you have an ORDER BY clause, the window includes only a section of consecutive rows within the partition, as defined by a windowing clause (that is, ROWS or RANGE). The default is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.
In your case, if the analytical clause is:
over( partition by regexp_substr(filename,'.*era') )
and the row you are looking at has filename = 'rsmedcalwa.20100105.chpwr.072.era.2', then the window takes up the entire partition, that is, all rows whose filename includes 'rsmedcalwa.20100105.chpwr.072.era'. In other words, the window is identical to the partition, because there is no ORDER BY clause.
But if you add an ORDER BY clause:
over( partition by regexp_substr(filename,'.*era') order by filename )
then the window (potentially) shrinks. Since there is no windowing clause, the default 'RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW' is used, which means that only the rows with a filename less than or equal to 'rsmedcalwa.20100105.chpwr.072.era.2' (using normal string comparison) will affect the result.
The analytic ORDER BY clause is required for certain functions (such as ROW_NUMBER), and is only meaningful for certain others. For most functions (including COUNT), the ORDER BY clause is optional, and you do not have to use one. In this case, it seems that you do not want the effect of a window smaller than the partition, so simply do not use an analytic ORDER BY clause with this function.
Remember, the analytic ORDER BY clause is completely independent of the query's ORDER BY clause. If you want the results presented in a certain order, use an ORDER BY clause at the end of the query; it will not change the results of the analytic functions.
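To make the windowing behavior concrete, here is a minimal sketch (a made-up three-row data set, not from the original post) contrasting COUNT(*) with and without an analytic ORDER BY:

```sql
-- Hypothetical three-row sample; all rows share the regexp_substr key 'a.era'.
with files as (
  select 'a.era'   filename from dual union all
  select 'a.era.1' filename from dual union all
  select 'a.era.2' filename from dual
)
select filename,
       -- no ORDER BY: the window is the whole partition, so cnt is 3 on every row
       count(*) over (partition by regexp_substr(filename, '.*era')) cnt_partition,
       -- with ORDER BY: default window is RANGE UNBOUNDED PRECEDING..CURRENT ROW,
       -- so the count grows 1, 2, 3 as the filenames increase
       count(*) over (partition by regexp_substr(filename, '.*era')
                      order by filename) cnt_running
from   files;
```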
Similar Questions
-
Oracle 11g Release 2
I'm assuming that the best solution is the use of analytical functions.
create table test3
( part_type_id varchar2(50)
, group_id number
, part_desc_id number
, part_cmt varchar2(50)
)
/
insert into test3 values ('ABC123', 1, 10, 'comment1');
insert into test3 values ('ABC123', 1, 10, 'comment2');
insert into test3 values ('ABC123', 2, 15, 'comment1');
insert into test3 values ('ABC123', 2, 15, 'comment2');
insert into test3 values ('EFG123', 25, 75, 'comment3');
insert into test3 values ('EFG123', 25, 75, 'comment4');
insert into test3 values ('EFG123', 25, 75, 'comment5');
insert into test3 values ('XYZ123', 1, 10, 'comment6');
insert into test3 values ('XYZ123', 2, 15, 'comment7');
commit;

select * from test3;

PART_TYPE_ID GROUP_ID PART_DESC_ID PART_CMT
-------------------- ---------- ------------ --------------------
ABC123 1 10 comment1
ABC123 1 10 comment2
ABC123 2 15 comment1
ABC123 2 15 comment2
EFG123 25 75 comment3
EFG123 25 75 comment4
EFG123 25 75 comment5
XYZ123 1 10 comment6
XYZ123 2 15 comment7

9 rows selected.

Desired output:

PART_TYPE_ID GROUP_ID PART_DESC_ID PART_CMT
-------------------- ---------- ------------ --------------------
ABC123 1 10 comment1
ABC123 2 15 comment1
XYZ123 1 10 comment1
XYZ123 2 15 comment2

RULE: where one part_type_id has multiple (2 or more distinct) combinations of group_id/part_desc_id.
NOTE: There are about 12 columns in the table; for brevity I only included 4.
Post edited by: orclrunner (updated desired output and rule)
Hello
Here's one way:
WITH got_d_count AS
(
    SELECT part_type_id, group_id, part_desc_id
    ,      MIN (part_cmt)                            AS min_part_cmt
    ,      COUNT (*) OVER (PARTITION BY part_type_id) AS d_count
    FROM   test3
    GROUP BY part_type_id, group_id, part_desc_id
)
SELECT DISTINCT
       part_type_id, group_id, part_desc_id, min_part_cmt
FROM   got_d_count
WHERE  d_count > 1
;
Output:
PART_TYPE_ID GROUP_ID PART_DESC_ID MIN_PART_CMT
------------ -------- ------------ ------------
ABC123 1 10 comment1
ABC123 2 15 comment1
XYZ123 1 10 comment6
XYZ123 2 15 comment7
Aggregate functions, such as COUNT and MIN, have analytic counterparts as well, which can sometimes give the same results. Use the analytic versions when each row of output corresponds to exactly 1 row of input, and the aggregate/GROUP BY versions when each row of output corresponds to a group of 1 or more input rows. In this problem, each row of output appears to correspond to a group of input rows having the same part_type_id, group_id, and part_desc_id (I'm just guessing; you never actually said), so I used GROUP BY to get 1 row of output per group.
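As a quick sketch of that distinction (using the standard scott.emp demo table as a stand-in; that table choice is my assumption, not from this thread):

```sql
-- Aggregate version: one output row per group
select deptno, count(*) as cnt
from   emp
group by deptno;

-- Analytic version: one output row per input row,
-- with the group's count repeated on each row
select empno, deptno,
       count(*) over (partition by deptno) as cnt
from   emp;
```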
-
SQL using the analytic function
Hi all, I'd like some help building my SQL query to retrieve the data described below.
I have a sample table containing data as below:
ID Desc Status
1 T1 DISABLED
2 T2 ACTIVE
3 T3 SUCCESS
4 T4 DISABLED
The thing I want to do is to select all the rows with an ACTIVE status in the table, but if there is no ACTIVE status, the query should give me the last row with DISABLED status.
Can I do this in a single query, for example by using an analytic function? If yes, can you help me build the query?
Kind regards
Raluce
Something like that?
I had to fix it.
with testdata as (
select 1 id, 'T1' dsc, 'DISABLED' status from dual union all
select 2 id, 'T2' dsc, 'ACTIVE' status from dual union all
select 3 id, 'T3' dsc, 'SUCCESS' status from dual union all
select 4 id, 'T4' dsc, 'DISABLED' status from dual
)
select
id
, dsc
, status
from testdata
where
status =
case when (select count(*) from testdata where status = 'ACTIVE') > 0
     then 'ACTIVE'
     else 'DISABLED'
end
and (
id in (select id from testdata where status = 'ACTIVE')
or
id = (select max(id) from testdata where status = 'DISABLED')
)

ID DSC STATUS
2 T2 ACTIVE
Maybe it's more efficient:
select
id
, dsc
, status
from testdata
where
status =
case when (select count(*) from testdata where status = 'ACTIVE') > 0
     then 'ACTIVE'
     else 'DISABLED'
end
and
id = (
case when (select count(*) from testdata where status = 'ACTIVE') > 0
     then id
     else (select max(id) from testdata where status = 'DISABLED')
end
)
Post edited by: chris227 (correction)
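Another option, sketched here as an untested single-scan alternative (my own suggestion, not from the thread): answer both questions with analytic functions so testdata is read only once, instead of via the repeated scalar subqueries above.

```sql
with testdata as (
  select 1 id, 'T1' dsc, 'DISABLED' status from dual union all
  select 2 id, 'T2' dsc, 'ACTIVE'   status from dual union all
  select 3 id, 'T3' dsc, 'SUCCESS'  status from dual union all
  select 4 id, 'T4' dsc, 'DISABLED' status from dual
)
select id, dsc, status
from (
  select t.*,
         -- 1 if any ACTIVE row exists anywhere in the table
         max(case when status = 'ACTIVE' then 1 else 0 end) over () as any_active,
         -- position of each row within its status, latest id first
         row_number() over (partition by status order by id desc) as rn
  from   testdata t
)
where (any_active = 1 and status = 'ACTIVE')
   or (any_active = 0 and status = 'DISABLED' and rn = 1);
```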
Post edited by: chris227 (extended)
-
Nth salary using the analytic function
I use the function below to calculate the second highest salary, with empno and deptno.
Is it possible to get the same result with another query, without using a join condition, using only analytic and windowing functions to get the desired output?
SELECT e.empno,
       e.deptno,
       tmp.sal AS second_higher_salary
FROM emp e,
     (SELECT empno,
             deptno,
             sal,
             DENSE_RANK() OVER (PARTITION BY deptno ORDER BY sal) AS rnk
      FROM emp
     ) tmp
WHERE tmp.deptno = e.deptno
AND tmp.rnk = 2
EMPNO DEPTNO SAL
---------- ---------- ----------
7934 10 2450
7782 10 2450
7839 10 2450
7876 20 1100
7369 20 1100
7902 20 1100
7788 20 1100
7566 20 1100
7900 30 1250
7844 30 1250
7654 30 1250
7521 30 1250
7499 30 1250
7698 30 1250
Here's my solution:
Select empno,
       deptno,
       first_value(sal) over (partition by deptno order by sal desc)
from (
      SELECT empno,
             deptno,
             decode(dense_rank() over (partition by deptno order by sal desc), 1, -sal, sal) sal
      FROM emp
     )
/
EMPNO DEPTNO FIRST_VALUE(SAL)OVER(PARTITIONBYDEPTNOORDERBYSALDESC)
---------- ---------- -----------------------------------------------------
7782 10 2450
7934 10 2450
7839 10 2450
7566 20 2975
7876 20 2975
7369 20 2975
7788 20 2975
7902 20 2975
7499 30 1600
7844 30 1600
7654 30 1600
7521 30 1600
7900 30 1600
7698 30 1600
-
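For comparison, here is an untested sketch of a more direct analytic-only approach (my own variant, not from the thread): rank the salaries per department, then spread the rank-2 value onto every row with a windowed MAX, so no join is needed.

```sql
select empno,
       deptno,
       -- copy the rank-2 salary onto every row of the department
       max(case when rnk = 2 then sal end)
           over (partition by deptno) as second_highest_salary
from (
  select empno, deptno, sal,
         dense_rank() over (partition by deptno order by sal desc) as rnk
  from   emp
);
```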
Using the analytic function to get the right output
Hello all;
I have the following sample data below:
create table temp_one
( id number(30),
  placeid varchar2(400),
  issuedate date,
  person varchar2(400),
  failures number(30),
  primary key (id)
);
insert into temp_one values (1, 'NY', to_date('03/04/2011', 'MM/DD/YYYY'), 'John', 3);
insert into temp_one values (2, 'NY', to_date('03/03/2011', 'MM/DD/YYYY'), 'Adam', 7);
insert into temp_one values (3, 'Mexico', to_date('03/04/2011', 'MM/DD/YYYY'), 'Wendy', 3);
insert into temp_one values (4, 'Mexico', to_date('03/14/2011', 'MM/DD/YYYY'), 'Gerry', 3);
insert into temp_one values (5, 'Mexico', to_date('03/15/2011', 'MM/DD/YYYY'), 'Zick', 9);
insert into temp_one values (6, 'London', to_date('03/16/2011', 'MM/DD/YYYY'), 'Mike', 8);
This is the output I want:
placeid issueperiod failures
NY 02/28/2011 - 03/06/2011 10
Mexico 02/28/2011 - 03/06/2011 3
Mexico 03/14/2011 - 03/20/2011 12
London 03/14/2011 - 03/20/2011 8
Any help is appreciated. I'll post my query as soon as I can think of a good logic for this...
Hello
user13328581 wrote:
... Please note, I'm still learning how to use analytic functions.

It doesn't matter; analytic functions will not help in this problem. The SUM aggregate function is all you need.
But what do you need to GROUP BY? What will each row of the result represent? A placeid? Yes, each row will represent one placeid, but subdivided further. You want a separate row of output for each placeid and each week, so you'll want to GROUP BY placeid and the week. You don't want to GROUP BY the raw issuedate; that would put March 3 and March 4 into separate groups. And you don't want to GROUP BY failures; that would mean a row with 3 failures could never be in the same group as a row with 9 failures. This produces the output you posted from the sample data you posted:
SELECT placeid
, TO_CHAR (TRUNC (issuedate, 'IW'), 'MM/DD/YYYY')
  || ' - '
  || TO_CHAR (TRUNC (issuedate, 'IW') + 6, 'MM/DD/YYYY') AS issueperiod
, SUM (failures) AS sumfailures
FROM temp_one
GROUP BY placeid
, TRUNC (issuedate, 'IW')
;
You could use a subquery to compute TRUNC (issuedate, 'IW') just once. The code would be about as complicated, efficiency probably would not improve substantially, and the results would be the same.
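That subquery variant might look like this (an untested sketch of the option described above):

```sql
select placeid,
       to_char(wk, 'MM/DD/YYYY') || ' - ' ||
       to_char(wk + 6, 'MM/DD/YYYY')      as issueperiod,
       sum(failures)                      as sumfailures
from (
  -- compute the week start (ISO Monday) once per row
  select placeid, failures, trunc(issuedate, 'IW') as wk
  from   temp_one
)
group by placeid, wk;
```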
-
How to achieve this using an analytic function - please help
version 10g.
This code works fine for my requirement. I'm trying to learn analytic functions and implement them in the query below. I tried to use ROW_NUMBER, but I couldn't achieve the desired results. Please give me some ideas.
Thank you.
SELECT c.tax_idntfctn_nmbr irs_number,
       c.legal_name irs_name,
       f.prvdr_lctn_iid
FROM tax_entity_detail c,
     provider_detail e,
     provider_location f,
     provider_location_detail pld
WHERE c.tax_entity_sid = e.tax_entity_sid
AND e.prvdr_sid = f.prvdr_sid
AND pld.prvdr_lctn_iid = f.prvdr_lctn_iid
AND c.oprtnl_flag = 'A'
AND c.status_cid = 2
AND e.oprtnl_flag = 'A'
AND e.status_cid = 2
AND (c.from_date) = (SELECT MAX (c1.from_date)
                     FROM tax_entity_detail c1
                     WHERE c1.tax_entity_sid = c.tax_entity_sid
                     AND c1.oprtnl_flag = 'A'
                     AND c1.status_cid = 2)
AND (e.from_date) = (SELECT MAX (c1.from_date)
                     FROM provider_detail c1
                     WHERE c1.prvdr_sid = e.prvdr_sid
                     AND c1.oprtnl_flag = 'A'
                     AND c1.status_cid = 2)
AND pld.oprtnl_flag = 'A'
AND pld.status_cid = 2
AND (pld.from_date) = (SELECT MAX (a1.from_date)
                       FROM provider_location_detail a1
                       WHERE a1.prvdr_lctn_iid = pld.prvdr_lctn_iid
                       AND a1.oprtnl_flag = 'A'
                       AND a1.status_cid = 2)
Published by: new learner on May 24, 2010 07:53
Published by: new learner on May 24, 2010 10:50

Could be like this (not tested)...
select *
from
(SELECT c.tax_idntfctn_nmbr irs_number,
        c.legal_name irs_name,
        f.prvdr_lctn_iid,
        c.from_date as c_from_date,
        max(c.from_date) over (partition by c.tax_entity_sid) as max_c_from_date,
        e.from_date as e_from_date,
        max(e.from_date) over (partition by e.prvdr_sid) as max_e_from_date,
        pld.from_date as pld_from_date,
        max(pld.from_date) over (partition by pld.prvdr_lctn_iid) as max_pld_from_date
 FROM tax_entity_detail c,
      provider_detail e,
      provider_location f,
      provider_location_detail pld
 WHERE c.tax_entity_sid = e.tax_entity_sid
 AND e.prvdr_sid = f.prvdr_sid
 AND pld.prvdr_lctn_iid = f.prvdr_lctn_iid
 AND c.oprtnl_flag = 'A'
 AND c.status_cid = 2
 AND e.oprtnl_flag = 'A'
 AND e.status_cid = 2
 AND pld.oprtnl_flag = 'A'
 AND pld.status_cid = 2) x
where c_from_date = max_c_from_date
AND e_from_date = max_e_from_date
AND pld_from_date = max_pld_from_date
-
[8i] can I use an analytical function, or do I need a subquery?
Hi all...
This should be a quick one, I hope. I'd like to solve my problem with an analytic function, but I don't know if it's possible. I can use a subquery if I have to, but I'd really rather not.
Here is a very simple version of what I'm trying to do:
Here is the sample data:
CREATE TABLE test123
( field1 VARCHAR2(10)
, field2 VARCHAR2(10)
, my_date DATE
);
INSERT INTO test123 VALUES ('value1', 'a', TO_DATE('12/31/1900','mm/dd/yyyy'));
INSERT INTO test123 VALUES ('value1', 'b', TO_DATE('01/02/2010','mm/dd/yyyy'));
INSERT INTO test123 VALUES ('value1', 'c', TO_DATE('01/05/2010','mm/dd/yyyy'));
INSERT INTO test123 VALUES ('value2', 'a', TO_DATE('12/31/1900','mm/dd/yyyy'));
INSERT INTO test123 VALUES ('value2', 'b', TO_DATE('01/01/2010','mm/dd/yyyy'));
INSERT INTO test123 VALUES ('value2', 'c', TO_DATE('01/15/2010','mm/dd/yyyy'));
I want the results:
FIELD1 FIELD2
--------------
value2 a
value2 b
value2 c
value1 a
value1 b
value1 c
I started with the following query:
SELECT field1
, field2
FROM test123
ORDER BY MIN(my_date) OVER (PARTITION BY field1) -- removed DESC here
, field2
But the problem is that the database uses '12/31/1900' as a default/initial value for any date field. I don't want those default values taken into account in my MIN calculation. I tried to put a WHERE clause in my analytic function [WHERE my_date <> TO_DATE('12/31/1900','mm/dd/yyyy')], but I kept getting a "missing right parenthesis" error, so it seems that you cannot have a WHERE clause there... or am I just doing something wrong?
Moreover, it is an 8i database...
Edited by: user11033437, May 20, 2010 17:16: took the DESC out of my ORDER BY clause. In my real application I need DESC, but not in the example.
Hello
A WHERE clause excludes rows from the result set. Whenever you want something more limited than a WHERE clause (for example, something that merely excludes values from the MIN calculation in the ORDER BY clause), think CASE:
SELECT field1
, field2
FROM test123
ORDER BY MIN ( CASE
               WHEN my_date > TO_DATE ('12/31/1900', 'MM/DD/YYYY')
               THEN my_date
               END
             ) OVER (PARTITION BY field1) DESC
, field2
;
This puts the lines for "Value1" first.
The minimum my_date for 'value1' (after excluding the 1900 values) is later than the minimum for 'value2', so I think you either made a mistake in the desired output, or you do not want descending order after all. As always, thanks for posting the sample data and the results so clearly.
-
Using the analytical function.
Hello
I have this scenario.
We have two locations for a given item_id: primary and bulk.
with t as (
select 21009 item_id, 9 primary_available, 450 max_qty, 100 bulk_available, 12122 bulk_locator_id from dual union all
select 21009 item_id, 9 primary_available, 450 max_qty, 2775 bulk_available, 8704 bulk_locator_id from dual union all
select 21009 item_id, 9 primary_available, 450 max_qty, 524 bulk_available, 15614 bulk_locator_id from dual union all
select 21009 item_id, 9 primary_available, 450 max_qty, 3300 bulk_available, 15654 bulk_locator_id from dual
)
select t.* from t;
I'm trying to write a select statement over this data, where I will restock the primary quantity from the bulk locations, BUT from the smallest bulk first. Once primary fills up, I shouldn't take any more product.
Is there an analytic function that would do this?
That's the best I could come up with:
with t as (
select 21009 item_id, 9 primary_available, 450 max_qty, 100 bulk_available, 12122 bulk_locator_id from dual union all
select 21009 item_id, 9 primary_available, 450 max_qty, 2775 bulk_available, 8704 bulk_locator_id from dual union all
select 21009 item_id, 9 primary_available, 450 max_qty, 524 bulk_available, 15614 bulk_locator_id from dual union all
select 21009 item_id, 9 primary_available, 450 max_qty, 3300 bulk_available, 15654 bulk_locator_id from dual
)
select t.*,
       max_qty - (primary_available + sum(bulk_available) over (partition by item_id order by bulk_available)) replen_this_much
from t;
So, in this scenario, I want to replen 100 from bulk_locator_id '12122' and 341 from bulk_locator_id '15614'. That's all; ZERO from the other locations (bulk_locator_id). If the question is not clear, please let me know.
Published by: RPuttagunta on September 11, 2009 16:23
Hello
Thanks for posting the sample data.
It would be useful if you also posted the output you want. Is it this?
ITEM_ID PRIMARY_AVAILABLE MAX_QTY BULK_AVAILABLE BULK_LOCATOR_ID REPLEN_THIS_MUCH
------- ----------------- ------- -------------- --------------- ----------------
21009 9 450 100 12122 100
21009 9 450 524 15614 341
21009 9 450 2775 8704 0
21009 9 450 3300 15654 0
If so, you can get to this:
SELECT t.*
, GREATEST ( 0
           , LEAST ( TO_NUMBER (bulk_available)
                   , TO_NUMBER (max_qty)
                     - ( TO_NUMBER (primary_available)
                       + NVL ( SUM (TO_NUMBER (bulk_available))
                               OVER ( PARTITION BY item_id
                                      ORDER BY TO_NUMBER (bulk_available)
                                      ROWS BETWEEN UNBOUNDED PRECEDING
                                               AND 1 PRECEDING )
                             , 0 )
                       )
                   )
           ) AS replen_this_much
FROM t
ORDER BY item_id
, TO_NUMBER (bulk_available)
;
You should really store your numbers in NUMBER columns.
You had essentially posted the analytic function you need. The problem was just wrapping that analytic function (or something very close to it) in LEAST and GREATEST, so that the replen_this_much column is always between 0 and TO_NUMBER (bulk_available).
-
By using the analytical function to sort without showing this column in the result.
Hello
We use Oracle 11.2.
How can I use Oracle analytic functions to sort the output of a query without showing that column in the result?
Here's my query:
Select distinct nvl(SRC_CHR_NM,'0') || '#' || nvl(EXPL_SRC,'0') || '#' || nvl(DIM_NM,'0') || '#' || nvl(DIM_CHR_NM,'0') || '#' || nvl(DIM_CHR_ABR,'0') || '#' ||
decode(instr(nvl(SRC_COL_NM,'0'), 'trim('), 1, replace(nvl(SRC_COL_NM,'0'), 'trim(', 'trim(PRM_DTL.'), 'PRM_DTL.' || nvl(SRC_COL_NM,'0')) AS ALLOFIT,
DIM_NM
from EXPL_CONFIG where SBJ_AREA_CD = 'PRMDTL'
I want to use analytics to sort by DIM_NM, but I do not want to show it. I want only the ALLOFIT column to appear in the output.
Something like row_number() over (order by DIM_NM desc).
Thank you!
Hello
If you use SELECT DISTINCT, then you can only ORDER BY things in the SELECT clause. (Do you really need SELECT DISTINCT? Often it's just an inefficient way to remove rows that should not be there at all.)
Do the SELECT DISTINCT in a subquery, and do the ORDER BY (and nothing else) in the main query.
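In outline, that pattern looks like this (a sketch with the long concatenation abbreviated as a placeholder, untested):

```sql
select allofit
from (
  -- keep the sort key alongside the distinct expression
  select distinct <your long concatenation> as allofit,
                  dim_nm
  from   expl_config
  where  sbj_area_cd = 'PRMDTL'
)
order by dim_nm desc;
```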
I hope that answers your question.
If this isn't the case, please post a small data example (CREATE TABLE and INSERT statements for only the relevant columns) and also post the results you want from that data.
Explain, using specific examples, how you get those results from that data.
Always say what version of Oracle you are using (for example, 11.2.0.2.0).
See the FAQ forum: https://forums.oracle.com/message/9362002#9362002
-
Thanks in advance to anyone who might help
I am writing a storage report, trying to calculate space usage over a period of time. For each db_nm and tblsp_nm I need to find the first and the last collections, and then find the difference in space_used.
The table structure is like this:
drop table tstg
/
create table tstg
( db_nm varchar(10),
  tblsp_nm varchar(15),
  space_used number,
  collection_time date
)
/
insert into tstg values ('EDW', 'SYSTEM', 100, to_date('01/07/2011','DD/MM/YYYY'));
insert into tstg values ('EDW', 'SYSTEM', 120, to_date('05/07/2011','DD/MM/YYYY'));
insert into tstg values ('EDW', 'SYSTEM', 150, to_date('10/07/2011','DD/MM/YYYY'));
insert into tstg values ('EDW', 'SYSAUX', 10, to_date('01/07/2011','DD/MM/YYYY'));
insert into tstg values ('EDW', 'SYSAUX', 12, to_date('05/07/2011','DD/MM/YYYY'));
insert into tstg values ('EDW', 'SYSAUX', 15, to_date('10/07/2011','DD/MM/YYYY'));
commit;
The expected result is:
DB_NM TBLSP_NM SPACE_USED COLLECTIO DIFF
---------- --------------- ---------- --------- ----------
EDW SYSAUX 15 10-JUL-11 5
EDW SYSTEM 150 10-JUL-11 50
I use the query below, but it gives more rows in the result than I want:
select db_nm, tblsp_nm, space_used, collection_time,
       last_value(space_used) over (partition by db_nm, tblsp_nm order by collection_time asc)
       - first_value(space_used) over (partition by db_nm, tblsp_nm order by collection_time asc) diff
from tstg
/
DB_NM TBLSP_NM SPACE_USED COLLECTIO DIFF
---------- --------------- ---------- --------- ----------
EDW SYSAUX 10 01-JUL-11 0
EDW SYSAUX 12 05-JUL-11 2
EDW SYSAUX 15 10-JUL-11 5
EDW SYSTEM 100 01-JUL-11 0
EDW SYSTEM 120 05-JUL-11 20
EDW SYSTEM 150 10-JUL-11 50
Thank you
Eduardo

Hello
Thanks for the sample data.
Here's a solution using the FIRST/LAST functions:
select db_nm
, tblsp_nm
, max(collection_time) as collection_time
, max(space_used) keep (dense_rank last order by collection_time) as space_used
, max(space_used) keep (dense_rank last order by collection_time)
  - max(space_used) keep (dense_rank first order by collection_time) as diff
from tstg
group by db_nm
, tblsp_nm
;
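For what it's worth, the original query can also be repaired by giving LAST_VALUE an explicit window (by default its window only extends to the current row, which is why the running differences appeared) and collapsing the duplicates with DISTINCT. An untested sketch of that alternative:

```sql
select distinct
       db_nm,
       tblsp_nm,
       -- force the window to cover the whole partition, not just rows so far
       last_value(space_used) over (partition by db_nm, tblsp_nm
                                    order by collection_time
                                    rows between unbounded preceding
                                             and unbounded following)
     - first_value(space_used) over (partition by db_nm, tblsp_nm
                                     order by collection_time) as diff
from   tstg;
```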
-
Using Oracle analytic function
Hi all
I use Oracle 10g Release 10.0.2
Here's the situation:
I have data in my table that looks like this:
Cust_id beg_dt end_dt sg_cd
264321502 01-MAY-97 19-MAR-98 1
264321502 21-MAY-98 15-OCT-98 6
264321502 20-OCT-98 22-APR-99 6
264321502 23-APR-99 25-APR-00 6
264321502 27-APR-00 20-JAN-02 6
264321502 25-JAN-02 15-MAY-02 6
264321502 21-MAY-02 27-MAY-02 6
264321502 31-MAY-02 17-FEB-03 6
264321502 21-FEB-03 06-SEP-04 1
264321502 25-FEB-03 25-FEB-03 1
264321502 31-MAR-03 30-APR-03 1
264321502 07-SEP-04 26-DEC-04 6
264321502 29-DEC-04 03-JAN-06 6
264321502 04-JAN-06 03-JAN-07 12
264321502 04-JAN-06 03-JAN-07 12
264321502 04-JAN-06 03-JAN-07 12
I need the results of the query as
Cust_id beg_dt end_dt sg_cd
264321502 01-MAY-97 19-MAR-98 1
264321502 21-MAY-98 17-FEB-03 6
264321502 21-FEB-03 30-APR-03 1
264321502 07-SEP-04 03-JAN-06 6
264321502 04-JAN-06 03-JAN-07 12
Basically, I need to take the min(beg_dt) and max(end_dt) of each run of sg_cd for this cust_id.
Any help is very much appreciated:
My query is like this:
Select cust_id, end_dt, beg_dt, sg_cd,
       min(beg_dt) over (partition by cust_id, sg_cd) as new_beg_dt,
       max(end_dt) over (partition by cust_id, sg_cd) as end_dt_nw
from cust_tab
Can be like that?
with data as
(
select 264321502 cust_id, to_date('01-MAY-97','dd-mon-rr') beg_dt, to_date('19-MAR-98','dd-mon-rr') end_dt, 1 sg_cd from dual union all
select 264321502, to_date('21-MAY-98','dd-mon-rr'), to_date('15-OCT-98','dd-mon-rr'), 6 from dual union all
select 264321502, to_date('20-OCT-98','dd-mon-rr'), to_date('22-APR-99','dd-mon-rr'), 6 from dual union all
select 264321502, to_date('23-APR-99','dd-mon-rr'), to_date('25-APR-00','dd-mon-rr'), 6 from dual union all
select 264321502, to_date('27-APR-00','dd-mon-rr'), to_date('20-JAN-02','dd-mon-rr'), 6 from dual union all
select 264321502, to_date('25-JAN-02','dd-mon-rr'), to_date('15-MAY-02','dd-mon-rr'), 6 from dual union all
select 264321502, to_date('21-MAY-02','dd-mon-rr'), to_date('27-MAY-02','dd-mon-rr'), 6 from dual union all
select 264321502, to_date('31-MAY-02','dd-mon-rr'), to_date('17-FEB-03','dd-mon-rr'), 6 from dual union all
select 264321502, to_date('21-FEB-03','dd-mon-rr'), to_date('06-SEP-04','dd-mon-rr'), 1 from dual union all
select 264321502, to_date('25-FEB-03','dd-mon-rr'), to_date('25-FEB-03','dd-mon-rr'), 1 from dual union all
select 264321502, to_date('31-MAR-03','dd-mon-rr'), to_date('30-APR-03','dd-mon-rr'), 1 from dual union all
select 264321502, to_date('07-SEP-04','dd-mon-rr'), to_date('26-DEC-04','dd-mon-rr'), 6 from dual union all
select 264321502, to_date('29-DEC-04','dd-mon-rr'), to_date('03-JAN-06','dd-mon-rr'), 6 from dual union all
select 264321502, to_date('04-JAN-06','dd-mon-rr'), to_date('03-JAN-07','dd-mon-rr'), 12 from dual union all
select 264321502, to_date('04-JAN-06','dd-mon-rr'), to_date('03-JAN-07','dd-mon-rr'), 12 from dual union all
select 264321502, to_date('04-JAN-06','dd-mon-rr'), to_date('03-JAN-07','dd-mon-rr'), 12 from dual
)
select cust_id, min(beg_dt) as beg_dt, max(end_dt) as end_dt, sg_cd
from
(
select x.*, sum(flg) over (partition by cust_id order by end_dt) as grp
from
(
select d.*,
       case when lag(sg_cd, 1, -9) over (partition by cust_id order by end_dt) != sg_cd
            then 1
       end as flg
from data d
) x
)
group by cust_id, sg_cd, grp
order by cust_id, end_dt
CUST_ID BEG_DT END_DT SG_CD
---------- --------- --------- ----------
264321502 01-MAY-97 19-MAR-98 1
264321502 21-MAY-98 17-FEB-03 6
264321502 21-FEB-03 06-SEP-04 1
264321502 07-SEP-04 03-JAN-06 6
264321502 04-JAN-06 03-JAN-07 12
-
Understanding row_number() and its use in analytic functions
Hello all;
I've been playing with ROW_NUMBER and trying to figure out how to use it, and yet I still can't understand...
I have the following code below
I made a simple select statement:
create table Employee
( ID VARCHAR2(4 BYTE) NOT NULL,
  First_Name VARCHAR2(10 BYTE),
  Last_Name VARCHAR2(10 BYTE),
  Start_Date DATE,
  End_Date DATE,
  Salary Number(8,2),
  City VARCHAR2(10 BYTE),
  Description VARCHAR2(15 BYTE)
)
insert into Employee (ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('01', 'Jason', 'Martin', to_date('19960725','YYYYMMDD'), to_date('20060725','YYYYMMDD'), 1234.56, 'Toronto', 'Programmer');
insert into Employee (ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('02', 'Alison', 'Mathews', to_date('19760321','YYYYMMDD'), to_date('19860221','YYYYMMDD'), 6661.78, 'Vancouver', 'Tester')
insert into Employee (ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('03', 'James', 'Smith', to_date('19781212','YYYYMMDD'), to_date('19900315','YYYYMMDD'), 6544.78, 'Vancouver', 'Tester')
insert into Employee (ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('04', 'Celia', 'Rice', to_date('19821024','YYYYMMDD'), to_date('19990421','YYYYMMDD'), 2344.78, 'Vancouver', 'Manager')
insert into Employee (ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('05', 'Robert', 'Black', to_date('19840115','YYYYMMDD'), to_date('19980808','YYYYMMDD'), 2334.78, 'Vancouver', 'Tester')
insert into Employee (ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('06', 'Linda', 'Green', to_date('19870730','YYYYMMDD'), to_date('19960104','YYYYMMDD'), 4322.78, 'New York', 'Tester')
insert into Employee (ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('07', 'David', 'Larry', to_date('19901231','YYYYMMDD'), to_date('19980212','YYYYMMDD'), 7897.78, 'New York', 'Manager')
insert into Employee (ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('08', 'James', 'Cat', to_date('19960917','YYYYMMDD'), to_date('20020415','YYYYMMDD'), 1232.78, 'Vancouver', 'Tester')
Select * from employee
and it returns it below
ID FIRST_NAME LAST_NAME START_DAT END_DATE SALARY CITY DESCRIPTION
---- ---------- ---------- --------- --------- ---------- ---------- ---------------
01 Jason Martin 25-JUL-96 25-JUL-06 1234.56 Toronto Programmer
02 Alison Mathews 21-MAR-76 21-FEB-86 6661.78 Vancouver Tester
03 James Smith 12-DEC-78 15-MAR-90 6544.78 Vancouver Tester
04 Celia Rice 24-OCT-82 21-APR-99 2344.78 Vancouver Manager
05 Robert Black 15-JAN-84 08-AUG-98 2334.78 Vancouver Tester
06 Linda Green 30-JUL-87 04-JAN-96 4322.78 New York Tester
07 David Larry 31-DEC-90 12-FEB-98 7897.78 New York Manager
08 James Cat 17-SEP-96 15-APR-02 1232.78 Vancouver Tester
I wrote another select statement with row_number. See below:
SELECT first_name, last_name, salary, city, description, id,
       ROW_NUMBER() OVER (PARTITION BY description ORDER BY city desc) "Test#"
FROM employee
and I get this result:
FIRST_NAME LAST_NAME SALARY CITY DESCRIPTION ID Test#
Celia Rice 2344.78 Vancouver Manager 04 1
David Larry 7897.78 New York Manager 07 2
Jason Martin 1234.56 Toronto Programmer 01 1
Alison Mathews 6661.78 Vancouver Tester 02 1
James Cat 1232.78 Vancouver Tester 08 2
Robert Black 2334.78 Vancouver Tester 05 3
James Smith 6544.78 Vancouver Tester 03 4
Linda Green 4322.78 New York Tester 06 5
I understand the PARTITION BY: basically, for each group a unique number will be assigned per row, so in this case, since Tester is one group, Manager is another group, and Programmer is another group, each group gets its own numbering. What is throwing me off is the ORDER BY and how those numbers are assigned. Why is 1 assigned to Alison Mathews in the Tester group, 2 assigned to James Cat, and 3 assigned to Robert Black?
I apologize if this is a stupid question. I tried to read about this online and in the Oracle documentation, but I still do not understand why.
user13328581 wrote:
What is throwing me off is the ORDER BY and how those numbers are assigned. Why is 1 assigned to Alison Mathews in the Tester group, 2 assigned to James Cat, and 3 assigned to Robert Black?
The description (partition column) and city (ordering column) values are the same for Alison, James and Robert, so any sort order of these 3 records is equally valid.
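If you need the numbering to be repeatable across runs, add a unique column as a tie-breaker to the analytic ORDER BY. A sketch of that adjustment (adding id as the tie-breaker is my suggestion, not from the thread):

```sql
SELECT first_name, last_name, salary, city, description, id,
       ROW_NUMBER() OVER (PARTITION BY description
                          ORDER BY city DESC, id) AS "Test#"  -- id breaks ties
FROM   employee;
```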
Oracle just happened to choose one. What do you think the "correct" numbering should be? -
update the table by using the analytic function
I have a table like this one:
create table fred (test varchar2(50), seq_no number(18,0));
insert into fred (test) values ('A');
insert into fred (test) values ('A');
insert into fred (test) values ('A');
insert into fred (test) values ('B');
insert into fred (test) values ('C');
insert into fred (test) values ('D');
insert into fred (test) values ('D');
insert into fred (test) values ('D');
insert into fred (test) values ('D');
insert into fred (test) values ('E');
insert into fred (test) values ('E');
insert into fred (test) values ('E');
I want a single statement that will update seq_no with a unique number within each test value,
for example like this:
I am sure that this should be an easy thing, but for the life of me I can't figure out how do it in a single statement. I should know this... hangs head in shameSQL 001> select test, row_number() over (partition by test order by test) from fred 2 / TEST ROW_NUMBER()OVER(PARTITIONBYTESTORDERBYTEST) -------------------------------------------------- -------------------------------------------- A 1 A 2 A 3 B 1 C 1 D 1 D 2 D 3 D 4 E 1 E 2 E 3
Use MERGE
SQL> merge into fred
  2  using (select rowid rid, test,
  3                row_number() over (partition by test order by test) sno
  4         from fred) fred1
  5  on (fred.rowid = fred1.rid)
  6  when matched then update
  7  set seq_no = sno
  8  /

12 rows merged.

SQL> select * from fred
  2  /

TEST  SEQ_NO
----- ------
A          1
A          2
A          3
B          1
C          1
D          1
D          4
D          3
D          2
E          1
E          2
E          3

12 rows selected.
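The same "number each test" update can be sketched in SQLite, which has no MERGE; a correlated count over rowid produces the same per-group sequence that the MERGE above builds with ROW_NUMBER(). This is an illustrative analogue, not the Oracle solution itself.

```python
import sqlite3

# Rebuild the thread's fred table: 3xA, B, C, 4xD, 3xE.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fred (test TEXT, seq_no INTEGER)")
conn.executemany("INSERT INTO fred (test) VALUES (?)",
                 [(t,) for t in "AAABCDDDDEEE"])

# One statement: for each row, count rows in the same test group
# with a rowid at or below this row's rowid.
conn.execute("""
    UPDATE fred
    SET seq_no = (SELECT COUNT(*) FROM fred f2
                  WHERE f2.test = fred.test
                    AND f2.rowid <= fred.rowid)""")

rows = conn.execute(
    "SELECT test, seq_no FROM fred ORDER BY test, seq_no").fetchall()
print(rows)
```

The correlated-count trick predates window functions and works in most engines; in Oracle the MERGE with ROW_NUMBER() shown above scales better because it scans the table once.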
-
How to achieve this using analytic functions - please help
Oracle version: 10g
999 means the profile accepts any value for that key.

with input_parameters as (
  select 10 filter_key , '10ACCC' filter_value from dual union all
  select 50 filter_key , '10ACCC0001' filter_value from dual union all
  select 60 filter_key , 'PIP' filter_value from dual union all
  select 70 filter_key , 'A' filter_value from dual
)
select * from input_parameters;

with profile_search as (
  select 100 profile_id , 10 filter_key , '10ACCC' filter_value from dual union all
  select 100 profile_id , 50 filter_key , '10ACCC0001' filter_value from dual union all
  select 100 profile_id , 50 filter_key , '10ACCC0002' filter_value from dual union all
  select 100 profile_id , 60 filter_key , '999' filter_value from dual union all
  select 100 profile_id , 70 filter_key , '999' filter_value from dual union all
  select 101 profile_id , 10 filter_key , '10ACCC' filter_value from dual union all
  select 101 profile_id , 50 filter_key , '10ACCC001' filter_value from dual union all
  select 101 profile_id , 60 filter_key , 'PIP' filter_value from dual union all
  select 101 profile_id , 70 filter_key , '999' filter_value from dual union all
  select 102 profile_id , 10 filter_key , '10ACCC' filter_value from dual union all
  select 102 profile_id , 50 filter_key , '10ACCC0001' filter_value from dual union all
  select 102 profile_id , 60 filter_key , 'PIP' filter_value from dual union all
  select 102 profile_id , 70 filter_key , 'A' filter_value from dual
)
select filter_key , wm_concat(filter_value)
from profile_search
group by profile_id, filter_key;
Key 70 has the highest weight, key 60 the next highest, then key 50. I need to identify the profile that best matches the input parameters:

102 is the first match because it matches exactly
101 is the second match because 999 can match any value
100 is the third match

Required result:
profile_id: 102
Published by: devarade on January 19, 2010 20:01

I guess there is a typo in your sample:
select 101 profile_id , 50 filter_key , '10ACCC001' filter_value from dual union all

must be:

select 101 profile_id , 50 filter_key , '10ACCC0001' filter_value from dual union all
Then:
with input_parameters as (
  select 10 filter_key , '10ACCC' filter_value from dual union all
  select 50 filter_key , '10ACCC0001' filter_value from dual union all
  select 60 filter_key , 'PIP' filter_value from dual union all
  select 70 filter_key , 'A' filter_value from dual
),
profile_search as (
  select 100 profile_id , 10 filter_key , '10ACCC' filter_value from dual union all
  select 100 profile_id , 50 filter_key , '10ACCC0001' filter_value from dual union all
  select 100 profile_id , 50 filter_key , '10ACCC0002' filter_value from dual union all
  select 100 profile_id , 60 filter_key , '999' filter_value from dual union all
  select 100 profile_id , 70 filter_key , '999' filter_value from dual union all
  select 101 profile_id , 10 filter_key , '10ACCC' filter_value from dual union all
  select 101 profile_id , 50 filter_key , '10ACCC0001' filter_value from dual union all
  select 101 profile_id , 60 filter_key , 'PIP' filter_value from dual union all
  select 101 profile_id , 70 filter_key , '999' filter_value from dual union all
  select 102 profile_id , 10 filter_key , '10ACCC' filter_value from dual union all
  select 102 profile_id , 50 filter_key , '10ACCC0001' filter_value from dual union all
  select 102 profile_id , 60 filter_key , 'PIP' filter_value from dual union all
  select 102 profile_id , 70 filter_key , 'A' filter_value from dual
)
select profile_id,
       sum(direct_match_cnt) || ' direct matches out of ' ||
       (select count(*) from input_parameters) match
from (
      select b.profile_id, b.filter_key, b.filter_value,
             sum(case when b.filter_value = a.filter_value then 1 else 0 end) direct_match_cnt
      from input_parameters a, profile_search b
      where b.filter_key = a.filter_key
        and (b.filter_value = a.filter_value or b.filter_value = '999')
      group by b.profile_id, b.filter_key, b.filter_value
     ) x
group by profile_id
having count(*) = (select count(*) from input_parameters)
order by sum(direct_match_cnt) desc
/

PROFILE_ID MATCH
---------- ----------------------------------------
       102 4 direct matches out of 4
       101 3 direct matches out of 4
       100 2 direct matches out of 4

SQL>
SY.
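The wildcard-ranking logic in the answer above can be checked with a small runnable sketch. It uses SQLite instead of Oracle (so no wm_concat, and `=` yields 0/1 directly inside SUM); table and column names follow the thread, and profile 101's value is taken with the typo already corrected.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE input_parameters (filter_key INT, filter_value TEXT);
INSERT INTO input_parameters VALUES
  (10,'10ACCC'), (50,'10ACCC0001'), (60,'PIP'), (70,'A');
CREATE TABLE profile_search (profile_id INT, filter_key INT, filter_value TEXT);
INSERT INTO profile_search VALUES
  (100,10,'10ACCC'), (100,50,'10ACCC0001'), (100,50,'10ACCC0002'),
  (100,60,'999'),    (100,70,'999'),
  (101,10,'10ACCC'), (101,50,'10ACCC0001'), (101,60,'PIP'), (101,70,'999'),
  (102,10,'10ACCC'), (102,50,'10ACCC0001'), (102,60,'PIP'), (102,70,'A');
""")

# Join on key, accepting either an exact value or the 999 wildcard;
# rank profiles by how many matches were exact rather than wildcard,
# keeping only profiles that cover every input key.
best = conn.execute("""
    SELECT b.profile_id,
           SUM(b.filter_value = a.filter_value) AS direct_matches
    FROM input_parameters a
    JOIN profile_search b
      ON b.filter_key = a.filter_key
     AND (b.filter_value = a.filter_value OR b.filter_value = '999')
    GROUP BY b.profile_id
    HAVING COUNT(DISTINCT b.filter_key) =
           (SELECT COUNT(*) FROM input_parameters)
    ORDER BY direct_matches DESC""").fetchall()
print(best)  # 102 exact on all 4 keys, 101 on 3, 100 on 2
```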
-
date ranges - possible to use analytical functions?
The following data structure must be converted into a date-range data structure.
Input:

START_DATE END_DATE   AMMOUNT
---------- ---------- ----------
01-01-2010 28-02-2010         10
01-02-2010 31-03-2010         20
01-03-2010 31-05-2010         30
01-09-2010 31-12-2010         40

Working solution:

with date_ranges as (
  select to_date('01-01-2010','dd-mm-yyyy') start_date
       , to_date('28-02-2010','dd-mm-yyyy') end_date
       , 10 ammount from dual union all
  select to_date('01-02-2010','dd-mm-yyyy') start_date
       , to_date('31-03-2010','dd-mm-yyyy') end_date
       , 20 ammount from dual union all
  select to_date('01-03-2010','dd-mm-yyyy') start_date
       , to_date('31-05-2010','dd-mm-yyyy') end_date
       , 30 ammount from dual union all
  select to_date('01-09-2010','dd-mm-yyyy') start_date
       , to_date('31-12-2010','dd-mm-yyyy') end_date
       , 40 ammount from dual
)
select rne.start_date
     , lead (rne.start_date-1,1) over (order by rne.start_date) end_date
     , ( select sum(dre2.ammount)
         from   date_ranges dre2
         where  rne.start_date >= dre2.start_date
         and    rne.start_date <= dre2.end_date ) range_ammount
from ( select dre.start_date from date_ranges dre
       union -- implicit distinct
       select dre.end_date + 1 from date_ranges dre ) rne
order by rne.start_date
/

Output:

START_DATE END_DATE   RANGE_AMMOUNT
---------- ---------- -------------
01-01-2010 31-01-2010            10
01-02-2010 28-02-2010            30
01-03-2010 31-03-2010            50
01-04-2010 31-05-2010            30
01-06-2010 31-08-2010
01-09-2010 31-12-2010            40
01-01-2011

7 rows selected.

However, I would like to use an analytic function to calculate the range_ammount. Is this possible?

Published by: user5909557 on July 29, 2010 06:19

Hello
Welcome to the forum!
Yes, you can replace the scalar sub-query with an analytic SUM, like this:
WITH change_data AS
(
    SELECT start_date AS change_date
    ,      ammount    AS net_amount
    FROM   date_ranges
    --
    UNION
    --
    SELECT end_date + 1 AS change_date
    ,      -ammount     AS net_amount
    FROM   date_ranges
)
, got_range_amount AS
(
    SELECT change_date AS start_date
    ,      LEAD (change_date) OVER (ORDER BY change_date) - 1 AS end_date
    ,      SUM (net_amount)   OVER (ORDER BY change_date)     AS range_amount
    FROM   change_data
)
, got_grp AS
(
    SELECT start_date
    ,      end_date
    ,      range_amount
    ,      ROW_NUMBER () OVER (ORDER BY start_date, end_date)
         - ROW_NUMBER () OVER (PARTITION BY range_amount
                               ORDER BY start_date, end_date) AS grp
    FROM   got_range_amount
)
SELECT   MIN (start_date) AS start_date
,        MAX (end_date)   AS end_date
,        range_amount
FROM     got_grp
GROUP BY grp, range_amount
ORDER BY grp
;
This should be much more efficient.
The code is longer than what you posted, largely because it merges consecutive ranges that have the same amount.
For example, if you add this row to the sample data:

union all
select to_date('02-01-2010','dd-mm-yyyy') start_date
     , to_date('30-12-2010','dd-mm-yyyy') end_date
     , 0 ammount from dual

then the query you posted produces:

START_DAT END_DATE  RANGE_AMMOUNT
--------- --------- -------------
01-JAN-10 01-JAN-10            10
02-JAN-10 31-JAN-10            10
01-FEB-10 28-FEB-10            30
01-MAR-10 31-MAR-10            50
01-APR-10 31-MAY-10            30
01-JUN-10 31-AUG-10             0
01-SEP-10 30-DEC-10            40
31-DEC-10 31-DEC-10            40
01-JAN-11
I suppose you want a new output row only where range_amount changes, that is:

START_DAT END_DATE  RANGE_AMOUNT
--------- --------- ------------
01-JAN-10 31-JAN-10           10
01-FEB-10 28-FEB-10           30
01-MAR-10 31-MAR-10           50
01-APR-10 31-MAY-10           30
01-JUN-10 31-AUG-10            0
01-SEP-10 31-DEC-10           40
01-JAN-11                      0
Of course, you could change the original query to do that, but it would end up just as complex as the query above, and less efficient.
Conversely, if you prefer the longer output, then you don't need the got_grp sub-query in the query above.

Thanks for posting the sample data as runnable statements; it is very useful. There are people who have used this forum for years and still have to be begged to do that.
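The sweep-line idea in the answer above (each range contributes +amount at its start date and -amount the day after its end date, and a running SUM over the change dates yields the amount in force in each elementary interval) can be sketched with SQLite window functions. Dates are kept as ISO strings so they sort correctly; names are illustrative, and `end_date_excl` holds the day after the thread's inclusive end date.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE date_ranges (start_date TEXT, end_date_excl TEXT, ammount INT);
INSERT INTO date_ranges VALUES
  ('2010-01-01','2010-03-01',10),
  ('2010-02-01','2010-04-01',20),
  ('2010-03-01','2010-06-01',30),
  ('2010-09-01','2011-01-01',40);
""")

# +amount at the start, -amount at the exclusive end; group the
# events per date, then take a running sum in date order.
rows = conn.execute("""
    WITH change_data AS (
        SELECT start_date AS change_date, ammount AS net_amount
          FROM date_ranges
        UNION ALL
        SELECT end_date_excl, -ammount FROM date_ranges
    ),
    daily AS (
        SELECT change_date, SUM(net_amount) AS net
          FROM change_data GROUP BY change_date
    )
    SELECT change_date,
           SUM(net) OVER (ORDER BY change_date) AS range_amount
      FROM daily
     ORDER BY change_date""").fetchall()
for r in rows:
    print(r)
```

Each printed row is the start of an elementary interval and the total amount in force from that date until the next change, matching the 10 / 30 / 50 / 30 / 0 / 40 progression in the thread.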